
Governance & Regulation

Is Current AI Governance Neglecting Traditional AI?

Lewis Hotchkiss, Research Officer


It is hard to imagine that just two years ago, before the release of OpenAI’s ChatGPT, Artificial Intelligence (AI) was a phrase uttered by a select few who worked in the field.

Now, two years on, the rush to use and implement AI has gone into overdrive, with every company trying to find ways to use this technology, and every government trying to find ways to regulate it.

AI has truly captured the world’s attention, igniting debates about potential benefits and risks and sparking a frenzy of interest and new opportunities.

 

But this captivation isn’t really with AI as a whole, but rather with “foundation models”, a specific subset of AI.
 

Foundation models, such as ChatGPT or Google Gemini, are generative AI models trained on massive amounts of data to perform a broad range of tasks, and we are now seeing them implemented into anything and everything.


However, while foundation models are undeniably powerful, they represent just one aspect of AI. Many other AI techniques, often referred to as "traditional AI" or “narrow AI”, have been quietly powering critical applications for years.


These models are designed to learn from a specific dataset to make very specific predictions, in contrast to foundation models, which are trained on vast amounts of data and can make general predictions across many tasks.
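For readers who want to see the difference concretely, the sketch below is a minimal, hypothetical example (the dataset and library choices are my own, purely for illustration) of a narrow model: it learns one mapping from one task-specific dataset, whereas a foundation model is trained once on vast data and then applied to many different tasks.

```python
# A minimal, illustrative sketch (not from this post) of a "traditional" or
# "narrow" AI model: trained on one specific dataset to answer one specific question.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

# One task-specific dataset: tumour measurements -> benign or malignant.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A narrow model: unlike a foundation model, it cannot be repurposed for
# other tasks without being retrained on new, task-specific data.
model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X_train, y_train)

print("Accuracy on held-out data:", model.score(X_test, y_test))
```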


Voice assistants like Alexa, recommendation systems in Netflix or Spotify, the spam detection in your email inbox, and Google's search algorithm all use forms of traditional AI trained for a particular purpose.


Yet these models have been quietly sitting in the background of our devices for years, long before the release of ChatGPT, and they pose just as many risks. So why is it only now that we are discussing the regulation of AI models?


I am an AI researcher in the field of healthcare, where the development of AI models is nothing new. We have been training AI models on health data to develop diagnostic tools, optimise drug discovery, and personalise treatment plans for years, and we are now starting to see the potential for these systems to be implemented into actual practice.


These advancements rely mainly on traditional AI techniques, and while these systems have shown remarkable efficacy, they also carry inherent risks, such as algorithmic bias, data privacy concerns, and the potential for errors with severe consequences. Yet traditional AI has been left behind when it comes to governance and guidance.


Although I am grateful to ChatGPT for bringing AI into public and policymaker discourse, it has brought a danger that no one ever talks about: a hyper-fixation on foundation models in the regulatory and governance landscape.

Recently, most reports and policy initiatives have been dedicated to understanding and mitigating the risks associated with foundation models, from misinformation to job displacement.


This is undoubtedly essential. Yet these discussions often overshadow the many ways in which traditional AI algorithms are already shaping our lives, and the potential risks they pose.

Even the UK's AI Safety Summit, which brought together policymakers and AI companies to discuss the advancements, opportunities, and risks of AI, focused only on foundation models.


Concerns about privacy, bias, and accountability apply equally to both types of AI, but much of the current regulatory discourse and guidance is focused on the unique challenges posed by foundation models.


While these types of models deserve attention due to their potential impact, policymakers and regulators must not lose sight of the broader AI landscape. By focusing solely on foundation models, we risk neglecting the critical issues surrounding other AI applications and hindering innovation in vital areas like healthcare, which currently lack the guidance they need.


To ensure that AI benefits society as a whole, governance and guidance are required that address the risks and opportunities presented by all types of AI, not just foundation models. This includes developing robust standards for data privacy, algorithmic fairness, and human oversight, as well as guidance on mitigating the risks associated with different AI techniques. Only by taking a holistic approach can we enable the safe development and implementation of all types of models.


We must not let foundation models blind us to the wider issues and risks in AI.
 
