Higher insurance premiums are the likely consequence of artificial intelligence, as it will be prohibitively difficult to establish liability in the event of damage or personal injury. So believes Brown Rudnick partner Nicholas Tse, a leading global litigator and adviser to governments and multinationals on dispute resolution.
Tse was speaking at an International Bar Association showcase session on the topic of apportioning fault when things go wrong with AI applications. He cited the example of driverless car accidents. ‘The law needs to try not to multiply problems of dealing with AI and should not invest AI with legal personality,’ he told a session moderated by Law Society president Christina Blacklaws. ‘The work of the law is to try and be pragmatic, ensuring accountability while not stifling progress.’
Tse pointed out that the UK government expects driverless cars to be on the streets by 2021. The Automated and Electric Vehicles Act 2018 already regulates certain areas of liability through insurance. He sees this as a model for other AI applications – for example, in the medical sector.
In terms of criminal law, it is ‘hard to envisage how you can sanction a robot for committing a crime’, said Tse. ‘[You] merely decommission the robot. We are told it’s difficult to predict the behaviour of AIs, so they may do things programmers never intended.’
He added: ‘The idea of trying to have some form of no-fault strict liability is the most pragmatic solution society can offer to victims of AI accidents. The UK act does that by putting the onus on the insurer, [which means] premiums could go up. Proving how AI has gone wrong is very difficult. We need to avoid that debate being had too often or too extensively.’
Tse stressed that the law has yet to get to grips with the ‘full chain of liability’ in AI, encompassing the manufacturer and the service provider. He described this as an ‘infernal contractual problem that needs to be grappled with’, outlining three proposals.
First, said Tse, lawmakers should consider whether insurance should be mandatory in certain areas where AI is used. Second, they should consider, in tandem, legislation for no-fault strict liability. Third, it should be made more difficult for AI creators to evade liability in claims, he said.
Tse was followed to the podium by Blacklaws, who chairs a UK public policy commission examining the impact of technology and data on human rights and justice.
Alluding to the many ethical challenges posed by AI, Blacklaws pointed out that ‘there is no international initiative on whether we need soft law or hard law to deal with this matter on a global scale’.
She added: ‘Solicitors and lawyers are starting to use AI systems to determine whether they will take on cases and in which courts. Machine-learning enables profiling of users. You have bespoke suggestions of cases and outcomes. The big rule-of-law issue here is the risk that AI could normalise outcome predictions and that therefore test cases are no longer brought.’
In the longer term, she warned, this could mean that the development of the common law ‘seizes up’.