
Responsible AI Research

Translating AI into clinical practice remains a major barrier for researchers to overcome, for reasons including a lack of clinical data for validation, biased datasets, and the privacy issues involved in handling sensitive patient data.


DPUK has developed governance, training and tools to enable researchers to build responsible AI that can be safely translated into real-world practice. We recently received funding from DARE UK to set up an AI Risk Evaluation Group to investigate the risks of using sensitive healthcare data to train AI models within Trusted Research Environments (TREs), and how those risks can be assessed and mitigated.


This has enabled us to develop a comprehensive governance framework for the development and deployment of AI models trained on healthcare data within TREs. Scan the QR code below to read our report.


Contact Lewis Hotchkiss to learn more

lewis.hotchkiss@chi.swan.ac.uk


Scan this QR code to read the framework report


Scan this QR code to sign up to the AI Risk Community Group

Training

We have also developed training material to help researchers understand the privacy risks of developing AI models on sensitive healthcare data and how to mitigate them appropriately.


Scan this QR code to sign up to the AI Risk Training Course
