Legal controls over the development and use of artificial intelligence hit an obstacle this week, as the US and UK refused to back a statement in support of AI regulation signed by 60 other countries.

US vice-president JD Vance warned other governments against regulating US tech firms

Despite the best efforts of president Macron, the creation of a global regulatory umbrella for artificial intelligence technologies took a backward step this week. An ‘AI Action Summit’ in Paris, attended by delegates from almost 100 countries, culminated in a communiqué entitled ‘Statement on Inclusive and Sustainable Artificial Intelligence’. The statement set out the responsibility of governments ‘to leverage AI’s potential to foster a more equitable and just world’. 

It commits signatories to ‘ensuring AI is open, inclusive, transparent, ethical, safe, secure and trustworthy’, to which end ‘a strong, inclusive global governance framework for AI must be established – one that is guided by a shared vision of fairness and progress’.

The statement attracted 60 signatories on behalf of national governments – including China’s – as well as the EU and the African Union. Two signatures were conspicuously absent, however. The UK and the US both declined to endorse the statement, apparently for contrary reasons.

For the UK, it seems the statement was too wishy-washy. The UK ‘felt the declaration didn’t provide enough practical clarity on global governance, nor sufficiently address harder questions around national security and the challenge AI poses to it’, an unnamed government spokesperson told the BBC.

The US declined to set out its reasons. But a flavour of how the Trump administration sees efforts to regulate AI was given in a forthright speech to the summit by vice-president JD Vance. Setting out the US government’s wish to avoid an ‘overly precautionary regulatory regime’, Vance warned: ‘The Trump administration is troubled by reports that some foreign governments are considering tightening the screws on US tech companies with international footprints. Now America cannot and will not accept that, and we think it’s a terrible mistake.’

US businesses already know what it is like to deal with ‘onerous international rules’, Vance said, singling out the EU’s Digital Services Act and the ‘massive regulations it created’. Another target was the GDPR, which for smaller businesses ‘means paying endless legal compliance costs or otherwise risking massive fines’.

While the speech was clearly aimed at the EU, its implications will be absorbed by UK government ministers as they ponder the ‘distinctively British approach’ to AI regulation promised by the prime minister last month. One element of this will be a reform of intellectual property laws to clarify the legality or otherwise of trawling copyright material to train large language model AI systems. The author of the government’s AI action plan, technology entrepreneur Matt Clifford, has said that the current legal uncertainty is hindering innovation: ‘This has gone on too long and needs to be urgently resolved.’

A consultation document published quietly just before Christmas agreed on the urgency. ‘The government does not believe that waiting for ongoing legal cases to resolve will provide the certainty that our AI and creative industries need in a timely fashion, or, potentially, at all.’ It proposes ‘direct intervention through legislation to clarify the rules in this area and establish a fair balance in law’.

The consultation document sets out four options – including ‘do nothing’, which has already been rejected. The three remaining options are:

  • Strengthen copyright law to clarify the need for licensing in all cases. This would ‘provide a clear route to remuneration for creators’ but make the UK a less attractive location for AI development.
  • Create a ‘broad data mining exception’ granting general permission to access copyright works without rights-holders’ permission. The models for this approach are Singapore, which expressly permits data mining of copyright material, and the US. (However, in a widely watched decision last week, the US district court in Delaware ruled in favour of legal information giant Thomson Reuters in a copyright infringement suit brought against AI business Ross Intelligence.)
  • A data mining exception with a rights reservation mechanism. This would permit text and data mining while allowing rights-holders to opt out.

The consultation document is clear that the last option is the preferred one: it ‘appears to have the potential to meet our objectives of control, access and transparency’.

The consultation, which closes on 25 February, has already polarised opinion. Lord Foster of Bath (Lib Dem former MP Donald Foster), chair of the Lords justice and home affairs committee, has likened the preferred option to a requirement ‘to stick “do not steal” labels on all your worldly goods’. This would ‘undermine the UK’s gold-standard copyright framework which is the bedrock of growth, investment and innovation across the creative industries’.

This week’s AI summit declaration warns that ‘the ongoing revolution in artificial intelligence is unleashing a power of action unprecedented in the history of humanity, which will create immense opportunities, entail risks, and rapidly transform the main economic, political and social balances’. Whether it can be reined in by a global regulatory effort – much less a single nation’s – remains very much an open question.
