Key takeaways from the AI Safety Summit

The UK’s AI Safety Summit has achieved as much as was realistically possible. Here, we look at the aftermath of the Summit and where it leaves the global AI governance agenda.

In the already narrow space for influencing the international AI agenda, the UK’s AI Safety Summit achieved as much as it could.

Although scaled down from Sunak’s initial ambitions, the AI Safety Summit secured commitments from both the US and China to share information on AI threats, a national (more so than international) initiative on AI safety testing, and the continuation of the Summit format in 2024.

The Summit consolidated the narrative around the catastrophic and security risks of frontier AI. This narrative is set to develop further, for example through a focus on the use of AI in dual-use goods.

Developments in the run-up to and during the Summit demonstrate that the AI governance space is only becoming more competitive. The rise of the US’s influence will squeeze the ability of the UK and other smaller nations to shape the agenda.

The Summit has now been established as yet another piece in the jigsaw of AI governance. Its purpose going forward may be to maintain a US-China dialogue while serving the interests of tech-ambitious nations. However, the bigger debates in areas such as trade, intellectual property, and privacy will likely continue to contribute to an East-West rift.

As we wrote in the lead-up to the Summit, the UK’s definition of “success” was getting consensus from a broader group of countries on the risks associated with frontier AI, as well as a commitment to mitigate them.

In this context, the UK had a good Summit. The so-called “Bletchley Declaration” was signed on Wednesday by 28 countries plus the EU. It is the first time that the US and China have signed up to a common position on AI governance, and it creates a narrow avenue for trust in the otherwise strained US-China trade relationship. If the entente cordiale lasts, it will be a British diplomatic victory.

The communiqué sets out three commitments: first, to share information on the risks of frontier AI (in a similar way to the existing threat-sharing mechanisms in cybersecurity); second, to develop national as well as international AI governance mechanisms; and third, to support “an internationally inclusive network of scientific research on frontier AI safety”.

Of these, the threat-sharing commitment is likely to become the most tangible. It was also a focus of Commission President Ursula von der Leyen’s address at the Summit. Regulatory coordination, on the other hand, is much further off, given that many of the signatories – including the UK and EU – already have divergent approaches to regulating AI.

The second tangible deliverable of the Summit is the UK AI Safety Institute. Sunak announced last week that the UK’s Frontier AI Taskforce will evolve into a permanent Institute, with Ian Hogarth – the current head of the Taskforce – continuing in post. The aim is to create public sector capability to test foundation models both before they are placed on the market and once they are operational.

In terms of computing power, the UK’s commitment to the Institute is sizeable – a new AI Research Resource. This £300 million network includes some of Europe’s supercomputers, along with priority access to two new machines in Bristol and Cambridge. However, the international appeal of the Institute has been lukewarm so far.

The initiative was also overshadowed by the White House’s announcement of a US AI Safety Institute on the morning of the first day of the Summit. Although the two Institutes have committed to working together, this is a largely symbolic gesture. It is far more likely that each will focus on servicing domestic foundation model providers and those within its geopolitical sphere of influence – which will give the US Institute the more prominent role.

The Summit outcomes are a scaled-down version of Sunak’s ambition to establish an international research body modelled on the Intergovernmental Panel on Climate Change (IPCC). Still, they create a parallel model for US-China engagement.

The Summit debates were wide-ranging, covering issues from the impact of AI on labour to the treatment of open source. Underpinning them was the agreement that AI could create catastrophic risks and national security issues. This is a distinctly US- and UK-driven narrative, and one that is set to expand on the international agenda.

To that end, a top priority of the Safety Institute will be to assess the dual-use capabilities of complex AI systems.

We are likely to see other international structures pick up this narrative.

The Bletchley Park 100 (the Summit’s attendees) also committed to the longevity of the format. As early as the opening speech, it was announced that the Summit would reconvene every six months, with South Korea as the next host in six months’ time, followed by France in a year.

Like the UK, these are moderate-sized countries with outsized tech sectors. As such, the Summit format is likely to be used as a way for tech-focused countries to assert their voice more strongly than they would otherwise be able to in bigger, more established, US-influenced multilateral fora such as the G7 or UN.

No formal mandate was given to the next AI Safety Summits. Therefore, the tension between this cooperation format and other international initiatives will continue, driven by the ambition of the participating nations to set themselves apart as AI leaders. The number of US announcements this week only fuels that competitive ambition.

In parallel, the UN’s High-Level Advisory Body on AI will also convene a leadership meeting in November next year, and the G7 Hiroshima Process is set to be taken forward under Italy’s presidency in 2024. The UN process, in particular, will also seek to ensure that China’s participation in the international debate is secured, at least at the baseline level of scientific knowledge exchange.

Whether via the Safety Summits, the UN process, or otherwise, China plans to pivot towards a more open foreign policy engagement with the West. This will be met with US suspicion.

In this context, the Summit in South Korea will be successful if it can extend the parallel track of truly international cooperation. What it likely will not discuss is the much more sensitive issue of trade in AI and related hardware, such as semiconductors. For this reason, its overall impact on the international stage will be restricted.

It has been a fraught few months for the UK and an incredibly busy week for AI governance. In the end, the Summit has achieved about as much as the UK could have hoped for, with China’s participation and a commitment to future international collaboration secured.

The Summit has also triggered a flurry of AI initiatives from individual countries, indicating that competition for leading international AI governance is only increasing.

Looking ahead, the most important issue to watch in the global governance space will be the rise of the US’s influence in the AI agenda and the implications this could have for China’s nascent participation in the debate.

Forefront Advisers’ Emerging Tech service assesses and anticipates how governments and policymakers in the EU and UK are adapting to drastic technological change, including AI and quantum computing. See how it can support you.

Share Post:

Stay Connected

More Updates

More Posts

Get in touch

Fill out the form below, and we will be in touch shortly.

Forefront Advisers Limited, 154-160 Fleet Street, London, EC4A 2DQ, registered in England and Wales, no. 13248974

Scroll to Top

Conor Sewell

Director

Conor works across Forefront’s digital assets, emerging technology and UK political teams. He joined Forefront Advisers from Flint Global, where he worked on financial services and digital assets across the UK, EU and Asia-Pacific. He previously worked at HM Treasury on the UK’s future relationship with the EU and within the Bank of England’s Capital Markets Division. He holds a master’s degree in mathematics from the University of Oxford.

Ramona Visenescu

Associate Director

Ramona is an Associate Director focusing on sustainable finance and circular economy. Ramona previously worked in Brussels at Teneo, where she also covered ESG legislative priorities and interned at the European Commission in DG Economy and Finance. She earned her Bachelor degree in International Relations and European Studies from the University of Bucharest and completed an Advanced Master in Financial Markets at the Solvay Brussels School of Economics & Management.

Pietro Candia

Analyst

Pietro works across EU Politics and is based in London. Before joining Forefront, he interned in other political risk advisory firms and worked in the government relations division of a major oil corporation. He holds a Bachelor’s degree in International Politics from Georgetown University and a Master’s degree in European and International Public Policy from LSE.

Imogen Stead

Senior Analyst

Imogen works across UK Politics. She has previously worked for two Civil Service departments covering policy and stakeholder engagement, and has experience in Marketing & External Affairs from her time at Oxford University Press and a members’ club in Brussels. She holds a first class BA and an MPhil in Classics from the University of Oxford, and is in the process of obtaining a DPhil in the subject. 

Michele Grassi

Analyst

Michele works on EU digital assets policy and Italian politics. He previously interned as MEP assistant at the European Parliament and as a Policy analyst at the Lombardy Regional Council. Michele holds a double degree MSc in European Public Policy from LSE and Bocconi University.

Pascal LeTendre-Hanns

Director

Pascale leads on Forefront’s Energy & Net Zero and Sustainability insights. Pascal previously worked in the Paris-based pro-European think tank, EuropaNova. He is leading sustainability policy coverage and following political developments in France and Spain. He graduated from UCL with a First Class Honours degree in European Social and Political Studies, specialising in international relations and French, which included a year at Sciences Po Paris. 

Charles d’Arcy-Irvine

Director

Charles works on political and policy insight advising businesses across different industries. He previously worked in investment banking at Goldman Sachs and Deutsche Bank, as an official at HM Treasury, as a political adviser to George Osborne, and in real estate. He holds a master’s degree in public administration from the John F. Kennedy School of Government at Harvard University and is a Trustee of the Epping Forest Schools Partnership Trust. 

Christopher Glück

Managing Director

Chris leads Forefront Advisers’ EU political analysis and insights team. He previously led Hanbury’s EU Public Affairs work. Previously, Chris worked on EU financial services policy in HM Treasury and as policy advisor for Jakob von Weizsäcker in the European Parliament. Chris holds a master’s degree from the College of Europe and read history at the University of Munich. 

Dea Markova

Managing Director

Dea leads Forefront Advisers’ crypto political and regulatory analysis team. She joined Forefront having grown and led the crypto client portfolio for FTI Consulting Brussels. Dea relocated back to Europe after her post in the Monetary Authority of Singapore, where she developed innovation programmes and infrastructure in the MAS Financial Technology and Innovation Group. She was also Head of Programmes for Innovate Finance, the UK FinTech trade association. Dea holds MSc Financial Regulation from the London School of Economics.

Gergely Polner

Managing Director

Gergely heads Forefront Advisers’ EU team. He was previously Head of EU Affairs at Standard Chartered Bank and at the British Bankers Association. Before his City career, he spent eight years at the EU institutions, including as a spokesperson for the EU Council Presidency and Head of UK Public Affairs for the European Parliament. While at the EU Institutions, Gergely worked on the EU’s sanctions regime and the regulatory reform in financial services. He started his career as a lawyer and built a successful legal translation business.

Get in touch

Fill out the form below, and we will be in touch shortly.

James McBride

Managing Director

James leads Forefront’s work on political and policy insight, advising businesses across a range of industriesJames previously worked in the Labour Party’s Policy Unit, where he led on economy and business policy. James worked on the ‘Labour In’ 2016 EU Referendum and 2017 General Election campaigns, as well as the party’s response to Budgets and other fiscal events. Prior to this, James worked in five government departments across Whitehall.