The pace at which AI has demonstrated its capabilities is nothing short of surreal. No matter the industry you work in, AI is bound to be part of your work conversations, and it has spilled over to our dining tables as well. But alongside the wonders generative AI has brought us, it is equally capable of causing disasters. This is not solely due to the inherent capabilities of the technology itself; numerous past innovations had a comparable capacity to affect human life.
However, the harmful potential of past technologies was largely contained because state institutions were more or less able to gauge the significance of their impact on society and formulate frameworks to mitigate the risk. AI's extraordinarily rapid growth and expansive domain, by contrast, make formulating a regulatory framework a formidable challenge.
Unprecedented Executive Order
Recognizing this urgency, US President Joe Biden signed an executive order (EO, or the Order) on Safe, Secure, and Trustworthy Artificial Intelligence to advance a coordinated, government-wide approach toward the safe and responsible development of AI. Past executive orders have addressed aspects of technology such as IT management, cybersecurity, and critical infrastructure, but none has ever focused on a single technology the way this one focuses on AI.
In a similar spirit, the first AI Safety Summit was hosted by the UK at Bletchley, the site that served as base camp for the Second World War codebreakers. Attendees included US Vice President Kamala Harris, European Commission President Ursula von der Leyen, award-winning computer scientists, executives from all the leading AI companies, and Elon Musk.
Bletchley Summit for AI Regulation
The summit recognised multiple risks, emphasizing particular safety risks arising at the 'frontier' of AI. Frontier AI is understood to mean highly capable general-purpose AI models, including foundation models, that can perform a wide variety of tasks, as well as relevant narrow AI that could exhibit harmful capabilities, where those capabilities match or exceed the most advanced models available today.
The Bletchley Declaration
As per the Bletchley Declaration, the agenda for addressing frontier AI risk will focus on:
- Identifying AI safety risks of shared concern, building a shared scientific and evidence-based understanding of these risks, and sustaining that understanding as capabilities continue to increase, in the context of a wider global approach to understanding the impact of AI in our societies.
- Building respective risk-based policies across participating countries to ensure safety in light of such risks, collaborating as appropriate while recognizing that approaches may differ based on national circumstances and applicable legal frameworks. This includes, alongside increased transparency by private actors developing frontier AI capabilities, appropriate evaluation metrics, tools for safety testing, and the development of relevant public sector capability and scientific research.
Government agencies have set themselves a lofty goal, and they have a history of being unreliable and inefficient. On the other hand, the promptness of their action is remarkable and encouraging. Stakeholders must therefore ensure that these efforts are focused in the right directions and benefit the general population without obstructing the advancement of the technology.