Developing Artificial Intelligence in Neuroscience that is Ethical, Fair, Private & Robust
Ensuring the AI we develop is fair and unbiased
Diversity in training data is crucial to ensuring that AI models do not discriminate. AI models developed in the UK are mainly trained on data collected from white European populations, leaving underrepresented ethnic groups behind. This is why we need to ensure that data from a wide range of ethnicities is available, so that we can build more representative AI.
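As a rough illustration of what checking representativeness can look like, the sketch below compares the ethnic make-up of a hypothetical training cohort against reference population proportions. The cohort, the group labels, and the reference percentages are all placeholders for illustration, not real figures.

```python
import pandas as pd

# Synthetic stand-in for a real training cohort; the labels and counts are illustrative only.
cohort = pd.DataFrame({
    "ethnicity": ["White"] * 180 + ["Asian"] * 8 + ["Black"] * 3 + ["Mixed"] * 5 + ["Other"] * 4
})

# Illustrative reference proportions (placeholder values, not real census figures).
reference = {"White": 0.82, "Asian": 0.09, "Black": 0.04, "Mixed": 0.03, "Other": 0.02}

observed = cohort["ethnicity"].value_counts(normalize=True)

# Flag groups that are markedly under-represented relative to the reference population.
for group, expected in reference.items():
    share = observed.get(group, 0.0)
    status = "under-represented" if share < 0.5 * expected else "ok"
    print(f"{group}: {share:.1%} of cohort vs {expected:.1%} reference ({status})")
```

A check like this does not fix bias on its own, but it makes gaps in the data visible before a model is trained on them.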
Preserving privacy within AI models to protect patient data
AI models in neuroscience are often trained on highly protected data collections, accessed via hospitals, universities, or Trusted Research Environments (TREs). When models are released from these environments, they can be vulnerable to attacks that attempt to recover or infer information about individual patients, potentially revealing disclosive patient information. This is why it is important to ensure that people's data is kept private within AI.
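One widely used way to limit what a released model can leak about any individual is differentially private training. The sketch below is a minimal, illustrative DP-SGD loop for logistic regression in NumPy: per-example gradients are clipped and noise is added before each update. The hyperparameters are assumptions, there is no formal privacy accounting here, and this is not a description of any specific TRE's release process.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def dp_sgd_logistic(X, y, epochs=20, lr=0.1, clip_norm=1.0, noise_multiplier=1.1, seed=0):
    """Minimal differentially-private SGD sketch for logistic regression.

    Each example's gradient is clipped to `clip_norm` and Gaussian noise scaled by
    `noise_multiplier` is added before the update, bounding how much any single
    patient's record can influence the released weights. Values are illustrative.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        # Per-example gradients, shape (n, d).
        errors = sigmoid(X @ w) - y
        grads = errors[:, None] * X
        # Clip each example's gradient norm to bound its individual contribution.
        norms = np.linalg.norm(grads, axis=1, keepdims=True)
        grads = grads / np.maximum(1.0, norms / clip_norm)
        # Average clipped gradients and add calibrated Gaussian noise.
        noise = rng.normal(0.0, noise_multiplier * clip_norm, size=d) / n
        w -= lr * (grads.mean(axis=0) + noise)
    return w

# Toy usage on synthetic data (a stand-in for protected patient data).
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)
print("trained weights:", np.round(dp_sgd_logistic(X, y), 3))
```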
Developing robust models for real-world data
AI is only useful if it actually works on real-world data. However, developing AI models is challenging when the available data is restricted to specific population collections. To create robust models, we need enough data to cover diverse population demographics, and we need to incorporate multi-modal data so that a model's decisions remain reliable across settings.
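As a simple illustration of what checking robustness can look like, the sketch below reports a model's accuracy separately for each subgroup (for example by acquisition site or demographic group) rather than relying on a single overall score. The data, group labels, and model are synthetic placeholders.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a real cohort: features, labels, and a subgroup label.
rng = np.random.default_rng(0)
n = 1000
X = rng.normal(size=(n, 4))
y = (X[:, 0] + 0.3 * rng.normal(size=n) > 0).astype(int)
group = rng.choice(["site_A", "site_B", "site_C"], size=n, p=[0.6, 0.3, 0.1])

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, group, test_size=0.3, random_state=0
)

model = LogisticRegression().fit(X_tr, y_tr)

# Overall accuracy can hide poor performance on small subgroups,
# so report accuracy and sample size for each group separately.
results = pd.DataFrame({"group": g_te, "correct": model.predict(X_te) == y_te})
print(results.groupby("group")["correct"].agg(["mean", "count"]))
```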
Promoting sustainability in processing and development
Processing large, complex data such as neuroimaging and genomics can require substantial computational resources. On top of this, training and developing AI often relies on HPC clusters, which can have a comparatively high environmental impact. Incorporating sustainability into neuroscience research can reduce this impact.
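One concrete first step is simply estimating the energy and emissions of a training run before deciding whether to repeat it. The sketch below is a back-of-envelope calculation; every figure in it (GPU power draw, number of GPUs, run length, data-centre PUE, grid carbon intensity) is an assumption to be replaced with measured values, for example from monitoring tools such as CodeCarbon.

```python
# Back-of-envelope estimate of training energy use and emissions.
# All numbers below are illustrative assumptions, not measurements.
gpu_power_kw = 0.3        # assumed average draw per GPU (300 W)
num_gpus = 4              # assumed
training_hours = 48       # assumed
pue = 1.5                 # assumed data-centre power usage effectiveness
carbon_intensity = 0.2    # assumed grid intensity, kg CO2e per kWh

energy_kwh = gpu_power_kw * num_gpus * training_hours * pue
emissions_kg = energy_kwh * carbon_intensity

print(f"Estimated energy: {energy_kwh:.0f} kWh")
print(f"Estimated emissions: {emissions_kg:.0f} kg CO2e")
```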