Funding for a DARE UK AI Risk Evaluation Community Group

Congratulations on securing the DARE UK Community Group funding! Can you tell us in a few words why you applied for this funding?

The motivation behind applying for this funding comes from the increasing development of Artificial Intelligence (AI) on complex healthcare data such as neuroimaging and genomics, which often requires training on protected patient data. This data is typically held in secure environments such as hospitals, Trusted Research Environments (TREs), or universities, all of which maintain strict security and governance protocols. As these models begin the promising shift from research development to implementation in clinical practice, we have to consider how patient privacy can be preserved when the models leave these protected environments. TREs like the Dementias Platform UK (DPUK) Data Portal have been leading the way in providing neuroimaging and genomic data for AI development, but TREs are also entrusted with the responsibility of keeping their data collections safe. Since AI models can be vulnerable to certain types of attack and data leakage, we need to carefully consider these risks and how we can develop recommendations and guidelines to mitigate them.


Wonderful, can you tell us a bit more about what you have planned for the community groups?

So our first workshop will bring together members of the public and patients to get an idea of what they think about the use in clinical practice of AI trained on protected patient data, and what they see as the potential risks. We will then hold a researcher workshop to discuss the various privacy-preservation techniques that can be employed to mitigate those risks, and what the barriers to implementing them might be. We will also have a data provider workshop to discuss with the investigators who collect these datasets the risks and mitigations around the use of AI on the data they share. And finally, we will have a workshop to bring together all of our findings and develop recommendations and guidelines for the development of AI in trusted environments.


What are your anticipated outcomes for this work?

From these workshops we hope to gain a deeper understanding of the actual risks of developing AI on protected data and what we can do to prevent them. From this we will create guidelines and recommendations that take into consideration the views of a diverse range of people, so that AI can be developed safely. More generally, we also hope to raise awareness of these concerns and of what researchers can do to ensure that the AI they develop preserves the privacy of the patient data used to train it.


What aspect are you most excited about?

I am most excited about hearing the views of the public and engaging with them to make sure they understand how AI is actually being used for good and the promise it holds for healthcare. Public and patient engagement is going to be key to ensuring that AI is implemented in a safe and responsible way. The public are ultimately the ones who contribute their data for this kind of AI development, and the ones on whom these AI tools will be used, so we need to ensure that we work closely with them.


What do you think will be the biggest challenge with this work?

This community group will bring together people with completely different backgrounds, opinions, and expertise, so it is going to be challenging to find a common set of guidelines that takes these diverse views into consideration. The different stakeholders will have different levels of awareness, understanding, and acceptance of AI technologies in healthcare and their implications for privacy, so communicating effectively with each group is going to be really important. This will be a fully interdisciplinary and collaborative approach, bringing together AI researchers, clinicians, and the public, so we will need to ensure that everyone has the necessary background knowledge to discuss these important concerns in AI.


Following completion of this work, what are your future aspirations?

Building on the recommendations and guidelines developed through these workshops, we will work on implementing new solutions within the DPUK Data Portal to assist researchers in developing safe AI on protected data. We also hope to do a lot more public engagement to showcase the benefits of AI and help the public understand what it actually is, so that they are less worried about and less reluctant to accept its implementation. But equally important is ensuring that researchers are aware of the risks associated with the AI they develop, and helping them build the skills to implement effective mitigations.


https://dareuk.org.uk/dare-uk-community-working-groups/dare-uk-community-working-group-ai-risk-evaluation-working-group/
