Critical technologies: our 2024 outlook

Our outlook for critical technologies in 2024 identifies 10 political and policy dynamics in the EU and the UK that will impact artificial intelligence, semiconductors, quantum computing and biotech.

Here is our outlook for critical technologies in 2024, building towards a conclusion on whether AI has reached peak hype:

  1. Regulatory and commercial implementation in the EU will curb AI enthusiasm.
  2. Political support of national champions will be unapologetic.
  3. Copyright litigation will dominate case law.
  4. The UK will fast-track competition concerns ahead of the EU.
  5. Debate lines on open source will deepen.
  6. New EU export and outbound investment control regimes will focus on critical technologies.
  7. The defence community will prioritise AI catch-up.
  8. Election campaigns will be different and will put AI self-regulation to the test.
  9. The Brussels effect and international governance efforts will fall victim to Balkanisation.
  10. The UK’s Labour party will sharpen its AI policy narrative.

The language of the EU AI Act will be finalised in January, and the legislation will enter into force in H1 2024. Despite the political agreement, last-minute, seemingly legalistic changes could still make a difference, given the precarious nature of the consensus.

More importantly, big items on scope, standardisation and enforcement are left to the European Commission. The speed and cost of the implementation of the EU AI Act will heavily depend on how those are rolled out in 2024. Whether the AI Act can boost EU competitiveness will become much more tangible, including as a part of Mario Draghi’s report.

The most complex determination to be made is who in an AI value chain is to be held responsible if something goes wrong. When looked at carefully, the EU AI Act leaves many ambiguities around this, which will be untangled in this year’s partnerships between customer-facing service providers and technology vendors.

Having the law on the table will incentivise corporates to pursue AI strategies. Many will start with a consideration of generative AI but will narrow down initial 2024 implementation ambitions to single-purpose AI. 2024 will be a race between corporate rollouts exposing technology shortcomings and technology innovation addressing these and drawing new horizons.

The lesson on political leaders’ minds is that technological leaps create critical dependencies on technology providers, and so far, the US has been the biggest winner. We can expect various shades of protectionism for domestic hopefuls.

We often referred to such efforts from French, German, and Italian policymakers in 2023, but the trend extends much more widely. For instance, Britain, France, Germany, India, Saudi Arabia, and the United Arab Emirates have collectively pledged USD 40 billion to support domestic AI.

Protectionism can come in several forms. The bulk of the promised investment is aimed at feeding AI’s appetite for GPUs and compute. Smaller sums will support AI start-ups themselves, as well as facilitate access to public quantum computing capabilities.

Finally, there is legislative change. In the EU, this already trickled into the AI Act, but in 2024 it is yet to affect its critical implementation choices.

The biggest AI policy battles of 2024 are likely to be over copyright. With the advent of generative AI, 2023 saw a big uptick in active IP and copyright cases brought against big tech firms.

Here, common law systems like the UK and the US can set up precedents. In the EU, on the other hand, the Copyright Directive of 2019 will be put to the test for its applicability to generative AI cases as well as its consistent interpretation across Member States. This could lead to more pressure on copyright law reform for AI-specific usage.

At a high level, the key questions in a copyright case are whether the output was generated through access to copyrighted materials, and with knowledge of the material it closely resembles; who is entitled to commercialise and copyright computer-generated content; and what constitutes fair use. These questions are hugely complex, for different reasons, in the context of generative AI.

Competition concerns across the AI value chain will only increase as AI investment continues to rise, and the bigger players surge ahead with next-generation and more powerful models. Of particular interest for UK regulators will be vertical integration agreements or acquisitions by big tech players along the AI value chain.

The UK CMA has taken the absence of AI legislation as a signal to act singularly and decisively. The CMA’s initial review of the foundation model market identified vertical integration as a particular competition issue for AI innovation and a topic it intends to focus on going forward, using new powers to be granted by the Digital Markets, Competition and Consumers Bill, likely to come into force in H1 2024.

The EU’s new competition framework under the Digital Markets Act does not extend to AI, and to the frustration of France, Germany and the European Parliament, no cloud providers were designated as gatekeepers. This may motivate member states to act unilaterally, as well as feature in EU electoral rhetoric, but we don’t expect more decisive action from Brussels until after the elections.

Whether and how to regulate open-source AI systems (and defining the scope of ‘open-source’ technology in AI) is likely to rise up the policy agenda in 2024.

At the eleventh hour, open-source general-purpose AI was partially exempt from the generative AI regulation in the EU’s AI Act, reflecting Member State concern over protecting Europe’s industry champions relying on this approach.

The EU is not necessarily all positive on open-source. Commissioner Vestager, in particular, has been a sceptic in past open-source discussions, probing for ways such models could mask anti-competitive behaviour or deflect responsibility for governance.
Some scientific research also supports the view that, in generative AI, openness alone neither ensures meaningful competition nor solves the problem of oversight.

The national security risk narrative being prioritised by the US could also tip the policy focus against open-source AI. As part of the Executive Order, the National Telecommunications and Information Administration (NTIA) has been tasked with studying the open-source issue and recommending actions to the White House by July 2024. How this narrative develops could see a big splinter between the US and Europe (particularly France) on AI policy.

2024 will see both an entrenchment of views on this debate and a proliferation of mid-ground models. For instance, open-source models trained on private data might offer some of the benefits and address some of the concerns with this governance form.

On 24 January, the European Commission will unveil its economic security proposals with five elements: the revision of the FDI Screening Regulation based on lessons learned from recent cases; reforming the European export control regime; an initiative on outbound investment screening; long-term targeted support for dual-use technologies; and increasing research security. (See: Key dates for Europe in 2024).

Meanwhile, the EU will formally designate AI, semiconductors, biotech, and quantum as critical technologies under its economic security framework.

The Commission wants to create an EU export control regime to replace individual Member State initiatives. This would bring, for instance, the export controls the Netherlands has imposed on its semiconductor manufacturer under the Commission’s purview rather than the Dutch government’s. Any move to do this will create divisions with the key Member States affected.

The EU is also looking to institute an outbound investment control regime to match that of the US. However, there will be key differences between the two set-ups: the EU’s is likely to represent more of an information-sharing exercise than the US’s, given that national security concerns (a key aspect of the US regime) are outside the Commission’s control.

Additionally, the US’s outbound investment regime is extraterritorial in its application, which is problematic from an EU perspective because it means that any relevant US company with a European offshoot could be prevented from doing business with another European company if it happens to have a Chinese investor.

There is also likely to be a greater focus on protecting information and R&D from China in key areas such as pharmaceuticals and biotechnologies.

It is sometimes overlooked that some of the largest access to computing power sits with defence agencies. The use of this resource towards training and deploying generative AI will be prioritised. We are looking at a year or two of the defence sector playing catch up to private sector rollouts.

The Pentagon and NATO have already been looking into using AI in defence under the prism of principles for responsible AI. As technology advances and other nations ramp up their production and adoption capabilities, the appetite for weapon automation increases.

In 2023, the Pentagon revisited the fine line between imposing ethical limits on the use of AI and exploiting the technology’s war-winning potential to the fullest under the Responsible AI Strategy & Implementation Pathway, an implementation plan building on past principles. Other nations, such as the UK, are prioritising the same internal debates.

The international defence community has also formally taken up these concerns. By its autumn 2024 General Assembly meeting, the UN will conclude a first-of-its-kind study of autonomous weapons.

Additionally, NATO has positioned itself not only as focused on the ethical use of AI in weapons but also on securing an AI development headway for NATO members so they have the economic power to promote ethics in the first place.

It is already a political cliché that 2024 is the year of elections, with four billion people worldwide set to go to the polls. At the same time, no Western democracy will have implemented binding regulations on narrow or generative AI. This means that internal and voluntary codes of conduct are the main governance framework large language models have to protect the democratic process from abuse.

Aware of this dynamic, both providers and governments will be relying on soft power and public-private sector relationships to avoid mistakes from past elections.

The many deepfake examples of 2023 point to the potential use of AI for disinformation campaigns. But the technology can also assist with hyper-tailoring and hyper-automation of campaign content. If used well, it could tip the scales towards candidates who otherwise would not have had the resources to advance.

In key markets – EU, US, UK, Taiwan, India – electoral change will increase regulatory uncertainty in critical technologies.

For different reasons, we don’t expect to see promising strides in the convergence of AI regulatory approaches either driven by the fabled Brussels effect or by the international governance processes in place.

We are sceptical of the Brussels effect in 2024 because the bulk of the EU regulatory focus – on safeguarding the use of AI in high-risk cases – is not a priority for the US or the UK, which are more concerned with security and catastrophic risks, respectively. The EU’s high-level framework for generative AI will come closer to principles established elsewhere.

The international governance fora – the G7 Hiroshima Process, the UK AI Safety Summit, and the UN AI Advisory Body – are set up as competitors as much as collaborators and cater to a Venn diagram of participants.

Of these, the UN AI Advisory Body may eventually emerge as a driver of cross-border coordination to address the scarcity of data, computing, or scientific collaboration globally. Moving convincingly towards these aspirations in the competitive context of 2024 seems unlikely.

Labour HQ knows it will need a solid policy offering on AI to take into the general election, whether proactively, as part of Starmer’s wider narrative of an economic reset, or defensively, if Sunak pushes his AI optimism up the political agenda or if AI itself forces itself into the campaign through deepfakes or misinformation.

The Conservatives’ decision not to introduce new dedicated AI legislation and to focus central regulation efforts almost entirely on existential AI threats has left Labour with an obvious policy gap to fill.

Some in Labour are already of the view that the focus of the Government’s AI Safety Summit on frontier AI systems was a missed opportunity to tackle the everyday harms that current and emerging generative AI systems can pose here and now. They will likely argue in favour of a more EU-aligned, consumer protection-centric approach, which would see a combination of new legislation introduced and existing legislation (e.g. the Online Safety Bill) updated to encompass the risks that these systems pose.

Labour will also need to balance its “tougher regulation” stance with a convincing pro-innovation, pro-industry tech narrative. The Regulatory Innovation Office (RIO), announced in October last year, is the beginning of this process. Labour’s positive AI policy offer is likely to focus on harnessing AI’s potential to drive reforms, improvements and efficiencies in public services.

2024 forecasts point to the generationally transformative power of AI. The commercial and policy trajectory of other critical technologies – semiconductors, quantum computing, and biotech – is intertwined.

The question at the year’s onset is: how close is generative AI to peak hype? In the evolution of new technologies (per Gartner’s hype-cycle methodology), peak hype is followed by a period of disillusionment before mass adoption ensues. In investment terms, this translates to more measured valuations.

The 2024 political, trade, and policy outlook is hugely susceptible to the technology hype cycle. In the EU, in particular, two competing drivers will emerge. On the one hand, the period of implementation – of regulatory as well as commercial change – will substitute the 2023 hype with an onslaught of practical complexities. Conversely, a drive towards creating national champions will maintain the hype narrative and influence trade policy.


Forefront Advisers Limited, 20 St Thomas St, London SE1 9RS, registered in England and Wales, no. 13248974
