The EU is adopting a prescriptive approach to policing artificial intelligence, aiming to ‘set the tone worldwide’. But lawyers point to the downsides of a detailed legal framework
Lawyers and others with opinions about the suddenly burning issue of so-called artificial intelligence have just three working days left to contribute to a proposed legal regime for the sector. The government consultation on the AI regulation white paper, published in March this year, closes next Wednesday. The outcome will inform prime minister Rishi Sunak’s ambition for the UK to become the global hub for AI regulation.
The UK is not alone in having such an ambition. In an important vote this week, the European Parliament approved a ‘negotiating position’ on the EU’s upcoming Artificial Intelligence (AI) Act ahead of talks with member states on the final shape of the law. The aim, according to the parliament, is to ‘ensure that AI developed and used in Europe is fully in line with EU rights and values’.
To this end, the parliament’s regime would significantly ratchet up the European Commission’s initial legislative proposals. It would ban ‘intrusive and discriminatory uses of AI’, such as biometric identification systems in public spaces and predictive policing systems. Untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases would also be unlawful.
The proposed overall regime would work very much along the lines of the General Data Protection Regulation (GDPR). For example, providers of large machine-learning models – so-called ‘foundation models’ – would have to assess and mitigate possible risks and register in an EU database. Generative AI systems built on such models, such as ChatGPT, would have to comply with transparency requirements and ensure safeguards against generating illegal content. Detailed summaries of the copyrighted data used for their training would also have to be made publicly available.
An EU AI Office would be tasked with monitoring how the AI rulebook is implemented.
According to Romanian MEP Dragos Tudorache, EU legislation ‘will set the tone worldwide in the development and governance of artificial intelligence, ensuring that this technology, set to radically transform our societies through the massive benefits it can offer, evolves and is used in accordance with the European values of democracy, fundamental rights, and the rule of law’.
However, the EU’s way of achieving this aim has been widely criticised by experts in the UK. Ellen Keenan-O’Malley, senior associate at IP specialist firm EIP, noted the tendency to pack legislation with new measures. ‘There is global consensus AI needs to be regulated in some form. However, between every parliamentary vote a new AI concern or technological advance leads to calls for the EU AI Act needing to be amended in some way.’ Such a detailed legal framework, she suggested, ‘may be out of date or require amending by the time it is published – let alone comes into force’.
Sarah Pearce, partner at US firm Hunton Andrews Kurth, voiced similar concerns. ‘I think we would be better focusing on the outputs and uses of the technology rather than trying to settle on an overly broad definition of what it is which will likely be outdated by the time the legislation comes into force.’
Tim Wright, tech and AI regulation partner at City firm Fladgate, noted that non-compliance with the AI Act will ‘come at significant cost’, attracting GDPR-type penalties of up to €20m or 4% of global turnover, whichever is higher.
The UK government will be keen to contrast the EU’s plans with its own ‘pro-innovation approach’. Speaking during London Tech Week, science, innovation and technology secretary Chloe Smith MP said the government wants to create a regulatory environment which fosters innovation and growth. Instead of targeting specific technologies – such as ChatGPT – ‘it focuses on the context in which AI is deployed and enables us to take a balanced approach’, she said.
For example, using an AI chatbot to summarise a long article presents very different risks to using the same technology to provide medical advice, she said. ‘The rules governing one will be markedly different to the other. And this flexibility runs throughout our white paper with a commitment to work in close partnership with regulators and business on sensible, pragmatic rules.’
In this, the minister seems to have an ally in the master of the rolls. Introducing the latest edition of regulatory and legal guidance covering another (and potentially overlapping) controversial technology – blockchain – Sir Geoffrey Vos warned of the dangers of over-hasty regulation. Regulators should wait until they understand the complexities, he said: ‘You don’t regulate just because you’re frightened of what might happen in the future.’