The government’s long-awaited white paper on the regulation of artificial intelligence takes 20 pages to get to the big problem. With mild understatement, it notes that 'there is no general definition of AI that enjoys widespread consensus'. Sensibly, the Department for Science, Innovation & Technology declines to enter that philosophical quagmire. Rather, it decides that the systems that concern us are those that are prone to generate outcomes that are hard to explain by logic and which may depart radically from the intentions of the human designer or controller.
Incidentally, these attributes of 'adaptivity' and 'autonomy' can sometimes appear spooky, but there's no ghost in the machine. They are an inevitable consequence of the way machine-learning systems are designed. Unlike the first two, failed, generations of so-called AI, these systems are not programmed with fixed rules; instead, developers create an algorithmic template which is automatically populated by data from the real - or, more usually, the virtual - world. Hence the hallucinogenic flights of fancy that pop up in the outputs of ChatGPT and the like.
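To make the distinction concrete, here is a minimal sketch (not drawn from the white paper, and with entirely hypothetical data and thresholds) contrasting a hand-written rule with a 'template' whose behaviour is filled in by whatever data it is trained on - which is why the outcome can surprise the human designer.

```python
def rule_based_classifier(income: float) -> str:
    """Old-style 'AI': the designer writes the rule explicitly."""
    return "approve" if income > 30_000 else "reject"


def train_threshold(examples: list[tuple[float, str]]) -> float:
    """Machine-learning style: the template is 'approve above some threshold';
    the threshold itself is chosen to fit the data, not by the designer.
    Here we simply pick the candidate that misclassifies fewest examples."""
    candidates = sorted(income for income, _ in examples)

    def errors(t: float) -> int:
        return sum(
            (income > t) != (label == "approve")
            for income, label in examples
        )

    return min(candidates, key=errors)


if __name__ == "__main__":
    # Hypothetical training data: the learned behaviour depends entirely on it.
    data = [(12_000, "reject"), (25_000, "reject"),
            (41_000, "approve"), (60_000, "approve")]
    threshold = train_threshold(data)
    print("Fixed rule says:", rule_based_classifier(35_000))
    print(f"Learned threshold: {threshold}; learned rule says:",
          "approve" if 35_000 > threshold else "reject")
```

Change the data and the learned rule changes with it; no human rewrote any code.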
So the government is not setting out to regulate AI as such, but rather its use. This clearly makes sense. To take an example from the legal world, an AI system that scrapes archives of court judgments to help legal researchers predict future outcomes should obviously be treated very differently from an identical system whose outputs are peddled to the public as 'legal advice'.
But are such uses not already knee-deep in regulation? The white paper concedes that this is the case - indeed, it raises the concern that innovation is being held back by a patchwork of regulatory regimes. Mercifully, its solution is not to create a new super-regulator but to 'empower' existing regulators - it cites the Health and Safety Executive, the Equality and Human Rights Commission and the Competition and Markets Authority - to come up with 'tailored, context-specific approaches that suit the way AI is actually being used in their sectors'.
These approaches should follow five clear principles:
- Safety: 'applications of AI should function in a secure, safe and robust way where risks are carefully managed'
- Transparency: 'organisations developing and deploying AI should be able to communicate when and how it is used and explain a system’s decision-making process in an appropriate level of detail that matches the risks posed'
- Fairness: 'AI should be used in a way which complies with the UK’s existing laws, for example the Equality Act 2010 or UK GDPR, and must not discriminate against individuals or create unfair commercial outcomes'
- Accountability and governance: regulators should ensure 'appropriate oversight of the way AI is being used and clear accountability for the outcomes'
- Contestability and redress: 'people need to have clear routes to dispute harmful outcomes or decisions generated by AI'
These principles will not be put on a statutory footing initially, the paper states, but following a period of implementation 'and when parliamentary time allows, we anticipate introducing a statutory duty on regulators requiring them to have due regard to the principles'. Such a duty would 'give a clear signal that we expect regulators to act and support coherence across the regulatory landscape'.
We shall see how this aspiration, which the government describes as a 'deliberately agile and iterative approach', survives its encounter with both parliamentary and regulatory tinkering.
In the meantime, the paper comes up with a more practical proposal: it endorses Sir Patrick Vallance's call earlier this month for a multi-regulator sandbox for testing new AI ideas, to be in operation within the next six months. The legal regulators should be brought on board. If we really want the UK to be an AI superpower, we should be more concerned with getting innovations out into the market than with regulation or definition.