After a blaze of sunshine, we are about to enter one of artificial intelligence’s periodic winters. These happen when a much hyped generation of the technology turns out to be not quite as exciting as predicted. During the winter, investment and research funds dry up, startups predicated on the tech go bust and bright young postgrads look elsewhere for career prospects. 

We've had several such winters since the 1960s, when the metaphor of the computer as an electronic brain first fell apart. Mostly they passed unnoticed outside the world of IT companies and academics. The coming winter will be different, largely because the preceding heatwave was so pervasive. We may not have noticed it, but most people reading this piece will have made use of some product or service categorised (misleadingly) as AI over the past couple of years. Generative AI systems based on large language models are a genuine breakthrough. 

Winter's coming, nonetheless - partly as a natural reaction to the hype. Investors are already expressing doubts about the prospect of returns on the colossal investment in AI - predicted to reach $1 trillion a year by 2027; a mania likened by some to the dotcom bubble and even Britain's railway mania of the 1840s. Meanwhile, there are signs that the apparently exponential growth in the technology's abilities has limits. See, for example, the interesting paper in Nature suggesting that as the data on which such systems are trained is increasingly itself created by generative AI, the language model will collapse in a feedback loop of inbred gibberish. 

This phenomenon comes on top of the already familiar limits imposed by the exhaustion of sources of training data - not least when it comes to law - and the environmental impact of the vast server farms needed to keep generative AI systems ticking over. 

None of these factors, however, is curbing governments' enthusiasm for legislation in this area. The EU's AI Act, which came into force last month and introduces a risk-based framework for governance, has received a surprisingly warm welcome. 'It creates a more robust legal framework for businesses to work within going forward,' says one expert, David Dumont, partner at international firm Hunton Andrews Kurth.

The measure is even name-checked in the American Bar Association's new report on AI, which notes approvingly that under the act 'developers must take steps to mitigate risks, ensure high-quality datasets, document their systems and have meaningful human oversight'. Whether this governance framework stimulates or hampers development remains to be seen, however. 

Now, after an admittedly slow start, legislators in the US are baring their teeth. Last year's surprise announcement of a federal executive order on the 'safe, secure and trustworthy development and use of artificial intelligence' was followed by action at state level, with at least a dozen jurisdictions drawing up measures: Utah, for example, imposes transparency obligations on the use of generative AI. This month, all eyes are on California, where legislators are considering a 'Safe and Secure Innovation for Frontier Artificial Intelligence Models Act' which would require developers to conduct rigorous safety testing and, critics say, impose draconian open-ended liabilities on the downstream use of AI products.

Industry groups say the measure would stifle innovation, handing China the lead on AI development. Supporters, including legal scholar Lawrence Lessig, say that as the home of top AI firms, California should take the lead in regulating the technology. 'There are fewer regulations on AI systems that could pose catastrophic risks than on sandwich shops or hairdressers,' Lessig and three other leading academic experts wrote this week. 

Where does this leave the UK? The last government promised a 'pro innovation' approach to AI regulation, drawing an implicit contrast with that of the EU. So far, Labour seems to share its outlook, with chancellor Rachel Reeves enthusing about AI's 'potential to grow a more productive economy, create good jobs across the country and deliver the excellent public services that people deserve'. The government last month appointed tech entrepreneur Matt Clifford to draw up an 'AI Opportunities Action Plan'. 

As for a governance framework, the government has committed only to 'seek to establish the appropriate legislation to place requirements on those working to develop the most powerful artificial intelligence models'. For once, procrastination looks like a good policy. Especially with winter coming.