Key takeaways
Here is our outlook for critical technologies in 2024, which feeds into a concluding view on whether AI has reached peak hype:
- Regulatory and commercial implementation in the EU will curb AI enthusiasm.
- Political support of national champions will be unapologetic.
- Copyright litigation will dominate case law.
- The UK will fast-track competition concerns ahead of the EU.
- Debate lines on open source will deepen.
- New EU export and outbound investment control regimes will focus on critical technologies.
- The defence community will prioritise AI catch-up.
- Election campaigns will be different and will put AI self-regulation to the test.
- The Brussels effect and international governance efforts will fall victim to Balkanisation.
- The UK’s Labour party will sharpen its AI policy narrative.
1) Regulatory and commercial implementation in the EU will curb AI enthusiasm
The language of the EU AI Act will be finalised in January, and the legislation will enter into force in H1 2024. Despite the political agreement, last-minute, seemingly legalistic changes could still make a difference, given the precarious nature of the consensus.
More importantly, big items on scope, standardisation and enforcement are left to the European Commission. The speed and cost of the implementation of the EU AI Act will heavily depend on how those are rolled out in 2024. Whether the AI Act can boost EU competitiveness will become much more tangible, including as a part of Mario Draghi’s report.
The most complex determination is who in an AI value chain is to be held responsible if something goes wrong. On close reading, the EU AI Act leaves many ambiguities here, which will be untangled this year in partnerships between customer-facing service providers and technology vendors.
Having the law on the table will incentivise corporates to pursue AI strategies. Many will start with a consideration of generative AI but will narrow down initial 2024 implementation ambitions to single-purpose AI. 2024 will be a race between corporate rollouts exposing technology shortcomings and technology innovation addressing these and drawing new horizons.
2) Political support of national champions will be unapologetic
A lesson on the minds of political leaders is that technological leaps create critical dependencies on technology providers, and so far, the USA has been the biggest winner. We can expect various shades of protectionism for domestic hopefuls.
We often referred to such efforts by French, German, and Italian policymakers in 2023, but the trend is far more widespread. For instance, Britain, France, Germany, India, Saudi Arabia, and the United Arab Emirates have collectively pledged USD 40 billion to support domestic AI.
Protectionism can come in several forms. The bulk of the promised investment will address AI's voracious appetite for GPUs and computing power. There will be smaller support for AI start-ups themselves, as well as facilitated access to public quantum computing capabilities.
Finally, there is legislative change. In the EU, this already trickled into the AI Act, but in 2024 it is yet to affect its critical implementation choices.
3) Copyright litigation will dominate case law
The biggest AI policy battles of 2024 are likely to be over copyright. With the advent of generative AI, 2023 saw a big uptick in active IP and copyright cases brought against big tech firms.
Here, common law systems like the UK and the US can set precedents. In the EU, on the other hand, the Copyright Directive of 2019 will be put to the test for its applicability to generative AI cases as well as its consistent interpretation across Member States. This could lead to more pressure on copyright law reform for AI-specific usage.
At a high level, the key questions in a copyright case are whether the output was generated through access to copyrighted material, whether it was produced with knowledge of the material it closely resembles, who is entitled to commercialise and copyright computer-generated content, and what constitutes fair use. Each of these questions is hugely complex, for different reasons, in the context of generative AI.
4) The UK will fast-track competition concerns ahead of the EU
Competition concerns across the AI value chain will only increase as AI investment continues to rise, and the bigger players surge ahead with next-generation and more powerful models. Of particular interest for UK regulators will be vertical integration agreements or acquisitions by big tech players along the AI value chain.
The UK CMA has taken the absence of an AI legislation debate as a signal to act singularly and decisively. The CMA’s initial review of the foundation model market identified vertical integration as a particular competition issue for AI innovation and a topic it intends to focus on going forward, using new powers to be granted to it by the Digital Markets, Competition and Consumers Bill, likely to come into force in H1 2024.
The EU’s new competition framework under the Digital Markets Act does not extend to AI, and to the frustration of France, Germany and the European Parliament, no cloud providers were designated as gatekeepers. This may motivate member states to act unilaterally, as well as feature in EU electoral rhetoric, but we don’t expect more decisive action from Brussels until after the elections.
5) Debate lines on open source will deepen
Whether and how to regulate open-source AI systems (and defining the scope of ‘open-source’ technology in AI) is likely to rise up the policy agenda in 2024.
At the eleventh hour, open-source general-purpose AI was partially exempt from the generative AI regulation in the EU’s AI Act, reflecting Member State concern over protecting Europe’s industry champions relying on this approach.
The EU is not uniformly positive on open source. Commissioner Vestager, in particular, has been a sceptic in past open-source discussions, probing whether such models can mask anti-competitive behaviour or allow providers to evade responsibility for governance.
Some scientific research also supports the view that, in generative AI, openness alone neither ensures meaningful competition nor solves the problem of oversight.
The national security risk narrative being prioritised by the US could also lead the policy focus to tip against open-source AI. As part of the Executive Order, the National Telecommunications and Information Administration (NTIA) has been tasked with studying the open-source issue and recommending actions to the White House by July 2024. How this narrative develops could see a big splinter between the US and Europe (France in particular) on AI policy.
2024 will see both an entrenchment of views on this debate and a proliferation of mid-ground models. For instance, open-source models trained on private data might offer some of the benefits and address some of the concerns with this governance form.
6) New EU export and outbound investment control regimes will focus on critical technologies
On 24 January, the European Commission will unveil its economic security proposals with five elements: the revision of the FDI Screening Regulation based on lessons learned from recent cases; reforming the European export control regime; an initiative on outbound investment screening; long-term targeted support for dual-use technologies; and increasing research security. (See: Key dates for Europe in 2024).
Meanwhile, the EU will formally designate AI, semiconductors, biotech, and quantum as critical technologies under its economic security framework.
The Commission wants to create an EU export control regime to replace individual Member State initiatives. This would bring, for instance, any export controls on Dutch semiconductor technology under the Commission’s purview rather than that of the Netherlands. Any move to do this will create divisions with the key Member States affected.
The EU is also looking to institute an outbound investment control regime to match that of the US. However, there will be key differences between the two set-ups: the EU’s is likely to represent more of an information-sharing exercise than the US’s, given that national security concerns (a key aspect of the US regime) are outside the Commission’s control.
Additionally, the US’s outbound investment regime is extraterritorial in its application, which is problematic from an EU perspective because it means that any relevant US company with a European offshoot could be prevented from doing business with another European company if it happens to have a Chinese investor.
There is also likely to be a greater focus on protecting information and R&D from China in key areas such as pharmaceuticals and biotechnologies.
7) The defence community will prioritise AI catch-up
It is sometimes overlooked that some of the largest pools of computing power sit with defence agencies. The use of this resource for training and deploying generative AI will be prioritised. We expect a year or two of the defence sector playing catch-up to private sector rollouts.
The Pentagon and NATO have already been looking into using AI in defence under the prism of principles for responsible AI. As technology advances and other nations ramp up their production and adoption capabilities, the appetite for weapon automation increases.
In 2023, the Pentagon revisited the fine line between imposing ethical limits on the use of AI and exploiting the technology’s war-winning potential to the fullest under its Responsible AI Strategy & Implementation Pathway, an implementation plan building on past principles. Other nations, such as the UK, are prioritising the same internal debates.
The international defence community has formalised its concerns. By its autumn 2024 General Assembly meeting, the UN will conclude a first-of-its-kind study of autonomous weapons.
Additionally, NATO has positioned itself not only as focused on the ethical use of AI in weapons but also on securing an AI development headway for NATO members so they have the economic power to promote ethics in the first place.
8) Election campaigns will be different and put AI self-regulation to the test
It is already a political cliché that 2024 is the year of elections, with four billion people worldwide set to go to the polls. At the same time, no Western democracy will have implemented binding regulations on narrow or generative AI. This means that internal and voluntary codes of conduct are the main governance framework large language model providers have for protecting the democratic process from abuse.
Aware of this dynamic, both providers and governments will be relying on soft power and public-private sector relationships to avoid mistakes from past elections.
The many deepfake examples of 2023 point to the potential use of AI for disinformation campaigns. But the technology can also assist with hyper-tailoring and hyper-automation of campaign content. If used well, it could tip the scales towards candidates who otherwise would not have had the resources to advance.
In key markets – EU, US, UK, Taiwan, India – electoral change will increase regulatory uncertainty in critical technologies.
9) The Brussels effect and international governance efforts will fall victim to Balkanisation
For different reasons, we don’t expect to see promising strides in the convergence of AI regulatory approaches either driven by the fabled Brussels effect or by the international governance processes in place.
We are sceptical of the Brussels effect in 2024 because the bulk of the EU regulatory focus – on safeguarding the use of AI in high-risk cases – is not a priority for the US or the UK, which are more concerned with security and catastrophic risks, respectively. The EU’s high-level framework for generative AI will come closer to principles established elsewhere.
The international governance fora – the G7 Hiroshima Process, the UK AI Safety Summit, and the UN AI Advisory Body – are set up as competitors as much as collaborators and cater to a Venn diagram of participants.
Of these, the UN AI Advisory Body may eventually emerge as a driver of cross-border coordination to address the scarcity of data, computing, or scientific collaboration globally. Moving convincingly towards these aspirations in the competitive context of 2024 seems unlikely.
10) The UK’s Labour party will sharpen its AI policy narrative
Labour HQ knows it will need a solid policy offering on AI to take into the general election, whether proactively, as part of Starmer’s wider narrative of an economic reset, or defensively, if Sunak pushes his AI optimism up the political agenda or if AI itself forces itself into the campaign through deepfakes or misinformation.
The Conservatives’ decision not to introduce new dedicated AI legislation and to focus central regulation efforts almost entirely on existential AI threats has left Labour with an obvious policy gap to fill.
Some in Labour are already of the view that the focus of the Government’s AI Safety Summit on frontier AI systems was a missed opportunity to tackle the everyday harms that current and emerging generative AI systems can pose here and now. They will likely argue in favour of a more EU-aligned, consumer protection-centric approach, which would see a combination of new legislation introduced and existing legislation (e.g. the Online Safety Bill) updated to encompass the risks that these systems pose.
Labour will also need to balance its “tougher regulation” stance with a convincing pro-innovation, pro-industry tech narrative. The Regulatory Innovation Office (RIO), announced in October last year, is the beginning of this process. Labour’s positive AI policy offer is likely to focus on harnessing AI’s potential to drive reforms, improvements and efficiencies in public services.
Conclusion
2024 forecasts point to the generationally transformative power of AI. The commercial and policy trajectories of the other critical technologies – semiconductors, quantum computing, and biotech – are intertwined with AI's.
The question at the year’s onset is how close generative AI is to peak hype. In the evolution of new technologies (by the Gartner methodology), peak hype is followed by a period of disillusionment before mass adoption ensues. In investment terms, this translates to more measured valuations.
The 2024 political, trade, and policy outlook is hugely susceptible to the technology hype cycle. In the EU, in particular, two competing drivers will emerge. On the one hand, the period of implementation – of regulatory as well as commercial change – will substitute the 2023 hype with an onslaught of practical complexities. Conversely, a drive towards creating national champions will maintain the hype narrative and influence trade policy.