I recently attended some events around the AI Safety Summit that looked at the wider issues of regulation and standards for the development of responsible AI. It was really interesting to hear the many different panel discussions on these issues and what is currently being done to ensure safe AI. I’ll summarise these discussions in the following sections to showcase the breadth of work that has been underway and what the future holds.

Importance of Standards and Regulation
Standards and regulations play a pivotal role in guiding the development and deployment of artificial intelligence. They provide a framework for ensuring safety, ethics, and accountability in the rapidly evolving field of AI. Many organisations were represented on the panels, including the Alan Turing Institute and the Ada Lovelace Institute, both of which have been instrumental in guiding AI governance and standards. The AI Standards Hub, led by the Alan Turing Institute, has curated a database of standards across various sectors, serving as a centralised repository that helps researchers access and understand the standards relevant to their work. This is really important, as so many standards are being developed that it can be confusing to find the ones relevant to your own work.
The British Standards Institution (BSI), which creates and maintains some of these standards, was also present. Its contributions play a really important role in setting benchmarks for responsible AI development and deployment in the UK, and it highlighted some of the work already underway to create standards and recommendations for the development of AI. The independent regulator Ofcom also made important points about what is already happening in the regulation of AI on social media platforms and search engines. A key aspect of Ofcom is that it is independent from government and has advisory boards drawn from a range of backgrounds. As the panel pointed out, we need to ensure that standards and regulation are not written to suit the needs of government and big organisations; so far, many organisations have been left to self-regulate, and they invariably create loopholes for themselves. This is why we need to ensure that AI is developed safely for everyone and is properly regulated in an independent manner.
All of these organisations stressed the importance of international collaboration. In an interconnected world, where AI developed in one country can significantly impact people all around the world, collaborative standard-setting is crucial. Joint efforts between countries and organisations ensure that standards are globally applicable and reflective of diverse perspectives, promoting a unified approach to AI safety.
The main attack on regulation and standards has been that they stifle innovation. Contrary to that misconception, they can actually serve to bolster it. By providing a clear framework and guidelines, standards allow researchers to be confident in the AI that they develop; they encourage the development of cutting-edge AI technologies while ensuring these adhere to ethical and safety considerations. Drawing parallels with the aviation industry highlights the positive impact of regulation: the stringent standards and regulations in aviation have not hindered progress; rather, they have been instrumental in ensuring safety, reliability, and the thriving growth of the industry. By ensuring planes didn’t drop out of the sky, regulation created the public trust in a mode of transport that is so widely used today. The same needs to happen in AI.
Importance of Public Engagement
A crucial point that comes up time and again is the importance of involving the public both during the development stage of AI models and at the regulation stage, to ensure that AI is being developed in their best interests. Engaging the public in discussions about AI is crucial for building trust, addressing concerns, and ensuring that the technology benefits society at large.
This is vital as it democratises the conversation about AI and empowers individuals from various backgrounds to have a say in shaping the future of technology that will inevitably impact their lives. However, the public's perception of AI is often shaped by the latest breakthroughs and applications. For example, the recent excitement around advancements in Large Language Models has greatly influenced public opinion and narrowed the conversation away from more pressing issues. AI has been in development for decades, but public tools like ChatGPT have only recently opened it up into the public domain. This means that such tools are what people think of when it comes to AI, when in reality there is a plethora of other types of AI, and this can really affect how these issues are discussed. This has to be addressed, and raising awareness of the different types of AI is really important to opening up the conversation. One of the panellists mentioned that public understanding of AI has actually decreased with the arrival of ChatGPT, as it so narrowly focuses people's view of the field.

It was also pointed out that attempts to simplify and explain AI models to the general public can strongly shape the way people view AI. Anthropomorphic analogies (such as comparing AI to a toddler learning how to do something), while intuitive, can be misleading: portraying AI as overly intelligent or human-like creates unrealistic expectations and misconceptions about its capabilities. The other important aspect is diverse representation in public engagement efforts. Considering a wide range of perspectives, including age, ethnicity, and socioeconomic background, ensures that AI development takes into account the varied needs and values of different communities.
Developing Methods for Evaluating and Auditing Safety in AI
Ensuring the safety and reliability of AI models is really important, which is why robust evaluation and auditing processes are essential for identifying and mitigating the potential risks associated with them. This includes detecting biases, uncovering vulnerabilities to adversarial attacks, and ensuring compliance with ethical and legal standards. Most organisations carry out some sort of evaluation of their AI models; however, it is left to each organisation to decide which evaluations and benchmarks to run. Organisation-generated benchmarks may inadvertently reflect the biases and perspectives of the organisation, potentially leading to skewed assessments of AI performance.
Internal benchmarks may also fail to capture the full range of real-world scenarios and use cases, which can create a real gap between the expected performance and the actual capabilities of the AI system. Take ChatGPT, for example: when it was shown to pass law and medical examinations, many people rushed to the conclusion that they could use the technology in practice and it would work amazingly. In reality this can be dangerous, yet the company's own evaluations set high expectations for users. Without external validation, there is a risk of overestimating the effectiveness of an AI model, which can result in unrealistic expectations and potential disappointment when the model encounters novel challenges.
External benchmarks created by a diverse community of experts can encompass a wider range of evaluation criteria, helping to ensure that AI models are assessed from various angles and against different perspectives and potential challenges. Community-driven benchmarks can also help establish standardised evaluation practices across the field, fostering a shared understanding of what constitutes safe and effective AI, to the benefit of the entire industry.
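To make the gap between internal and external evaluation concrete, here is a minimal sketch of a benchmark harness in Python. Everything in it (the run_suite helper, the toy model, the example prompts) is a hypothetical illustration of the idea, not any organisation's actual tooling or benchmark suite.

```python
# Minimal sketch of a benchmark harness contrasting internal and external
# evaluation suites. All names here are hypothetical illustrations.

from typing import Callable, Dict, List, Tuple

# A "suite" is just a list of (prompt, expected_answer) pairs.
Suite = List[Tuple[str, str]]


def run_suite(model: Callable[[str], str], suite: Suite) -> float:
    """Return the fraction of prompts the model answers correctly."""
    correct = sum(1 for prompt, expected in suite if model(prompt) == expected)
    return correct / len(suite)


def evaluate(model: Callable[[str], str], suites: Dict[str, Suite]) -> None:
    """Score the model on every suite and print per-suite accuracy."""
    for name, suite in suites.items():
        print(f"{name}: {run_suite(model, suite):.0%} accuracy")


if __name__ == "__main__":
    # Toy "model": only knows answers it was effectively tuned on.
    known = {"2+2?": "4", "Capital of France?": "Paris"}
    model = lambda prompt: known.get(prompt, "unknown")

    suites = {
        # Internal suite: drawn from the same distribution the model was
        # tuned on, so it flatters the model.
        "internal": [("2+2?", "4"), ("Capital of France?", "Paris")],
        # External, community-authored suite: probes cases the developer
        # did not anticipate.
        "external": [("2+3?", "5"), ("Capital of Kenya?", "Nairobi")],
    }
    evaluate(model, suites)
    # Prints:
    #   internal: 100% accuracy
    #   external: 0% accuracy
```

The toy model scores perfectly on the suite drawn from its own tuning data but fails on the community-authored prompts, which is exactly the kind of overestimation that external, community-driven benchmarks are meant to expose.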
Why It's Important to Have This Conversation and Why It's Happening Now
The discussion surrounding AI safety is more crucial than ever, and its prominence in public discourse has surged in recent years. AI development has been underway for decades, with significant progress made in research and application. However, it was largely confined to academic and specialised industry circles, operating in relative obscurity. Only recently has AI development come to the forefront of public awareness. Advancements in technology, coupled with high-profile applications in areas like natural language processing and computer vision, have propelled AI into mainstream consciousness.
AI technologies have become increasingly integrated into our daily lives, from virtual assistants to recommendation systems. This widespread integration has raised awareness about the potential impact of AI on society. As AI systems become more powerful and capable, concerns about their ethical and social implications have grown, and questions surrounding bias, accountability, and the potential for harm have ignited public interest and demand for responsible AI development.