In the already narrow space for influencing the international AI agenda, the UK’s AI Safety Summit achieved as much as it could.
Although scaled down from Sunak's initial ambitions, the AI Safety Summit secured commitments from both the US and China to share information on AI threats, a national (more so than international) initiative on AI safety testing, and the continuation of the Summit format into 2024.
The Summit consolidated the narrative of the catastrophic and security risks of frontier AI. This narrative is set to develop further, for example through a focus on the dual-use capabilities of AI systems.
Developments in the run-up to and during the Summit demonstrate that the AI governance space is only becoming more competitive. The rise of the US’s influence will squeeze the ability of the UK and other smaller nations to shape the agenda.
The Summit has now been established as yet another piece in the jigsaw of AI governance. Its purpose going forward may be to maintain a US-China dialogue while serving the interests of tech-ambitious nations. However, the bigger debates on areas such as trade, intellectual property, and privacy will likely continue to contribute to an East-West rift.
The UK had a good Summit
As we wrote in the lead-up to the Summit, the UK’s definition of “success” was getting consensus from a broader group of countries on the risks associated with frontier AI, as well as a commitment to mitigate them.
In this context, the UK had a good Summit. The so-called "Bletchley Declaration" was signed on Wednesday by 28 parties, including the EU. This is the first time that China and the US have signed up to a joint position on AI governance, and it creates a narrow avenue for trust in the otherwise strained US-China trade relationship. If the entente cordiale lasts, it will be a British diplomatic victory.
The communiqué sets out three commitments: first, to share information on the risks of frontier AI (in a similar way to the existing threat-sharing mechanisms in cybersecurity); second, to develop national as well as international AI governance mechanisms; and third, to support “an internationally inclusive network of scientific research on frontier AI safety”.
Of these, the threat-sharing commitment is likely to become the most tangible. It was also a focus of Commission President Ursula von der Leyen's address at the Summit. Regulatory coordination, on the other hand, is much further off, given that many of the signatories – including the UK and the EU – already have divergent approaches to regulating AI.
The second tangible deliverable of the Summit is the UK AI Safety Institute. Sunak announced last week that the UK's Frontier AI Taskforce will evolve into a permanent Institute, with Ian Hogarth – the current head of the Taskforce – continuing in post. The aim is to create public sector capability to test foundation models both before they are placed on the market and once they are operational.
In terms of computing power, the UK's commitment to the Institute is sizeable: a new AI Research Resource. This £300 million network will include some of Europe's most powerful supercomputers, with priority access to two new machines in Bristol and Cambridge. However, international appetite for the Institute has so far been lukewarm.
The initiative was also overshadowed by the White House's announcement of a US AI Safety Institute on the morning of the first day of the Summit. Although the two Institutes have committed to working together, this is a largely symbolic gesture. It is far more likely that each will focus on servicing domestic foundation model providers and those within its country's geopolitical sphere of influence – which will give the US Institute the more prominent role.
The Summit outcomes are a scaled-down version of Sunak’s ambition to establish an international research body modelled on the Intergovernmental Panel on Climate Change (IPCC). Still, they create a parallel model for US-China engagement.
AI as a security concern
The Summit debates were wide-ranging, covering issues from AI's impact on labour to the treatment of open source. Underpinning them was the agreement that AI could create catastrophic risks and national security issues. This is a distinctively US- and UK-driven narrative, and one that is set to expand on the international agenda.
To that end, a top priority of the Safety Institute will be to assess the dual-use capabilities of complex AI systems.
We are likely to see other international structures pick up this narrative.
Another piece in the jigsaw
The Bletchley Park 100 (the attendees of the Summit) also committed to the longevity of the format. As early as the opening speech, it was announced that the Summit would be repeated every six months, with South Korea hosting next, followed by France in a year.
Like the UK, these are moderate-sized countries with outsized tech sectors. The Summit format is therefore likely to serve as a way for tech-focused countries to assert their voice more strongly than they otherwise could in bigger, more established, US-influenced multilateral fora such as the G7 or the UN.
No formal mandate was given to the next AI Safety Summits. Therefore, the tension between this cooperation format and other international initiatives will continue, driven by the ambition of the participating nations to set themselves apart as AI leaders. The string of US announcements this week only fuels that competitive ambition.
In parallel, the UN High-level Advisory Body on AI will deliver a leadership meeting in November next year, and the G7 Hiroshima Process is set to continue under Italy's G7 presidency in 2024. The UN process, in particular, will also seek to secure China's participation in the international debate, at least at the baseline level of scientific knowledge exchange.
Whether via the Safety Summits, the UN process, or otherwise, China plans to pivot towards more open foreign policy engagement with the West. This will be met with US suspicion.
In this context, the Summit in South Korea will be successful if it can extend the parallel track of truly international cooperation. What it likely will not discuss is the much more sensitive issue of trade in AI and related hardware, such as semiconductors. For this reason, its overall impact on the international stage will be restricted.
Conclusion
This has been a fraught few months for the UK and an incredibly busy week for AI governance. In the end, the Summit has achieved about as much as the UK could have hoped for, with China’s participation and a commitment to future international collaboration secured.
The Summit has also triggered a flurry of AI initiatives from individual countries, indicating that competition to lead international AI governance is only intensifying.
Looking ahead, the most important issue to watch in the global governance space will be the rise of the US’s influence in the AI agenda and the implications this could have for China’s nascent participation in the debate.