Lawyers contemplating whether to embrace artificial intelligence are ‘damned if they do and damned if they don’t’. That is how the Master of the Rolls put it in a recently published speech to the Professional Negligence Bar Association, looking at how AI might affect negligence claims against lawyers.

Rachel Rothwell

Sir Geoffrey Vos divided lawyers’ attitudes to AI into two general camps. First, the ‘luddites’ argue that AI randomly makes up answers and is prone to bias. This makes it too dangerous and inaccurate to be used in the law, where the public need legal advice and decisions from human lawyers and judges. AI must be limited to the most regulated of circumstances, and only deployed with ‘the greatest care and circumspection’.

Then there is the second camp: those who rush to embrace AI. This school of thought contends that before we know it, clients will be refusing to pay for legal tasks performed by human lawyers when they could be done better, quicker and more cheaply with AI. They argue that large language models are about to become far more reliable. These will be integrated with database technology to reduce hallucination, inaccuracy and bias. Meanwhile, the advantage of AI is that it can process large datasets, summarise legal materials, undertake legal research and resolve complex problems far more effectively than human lawyers and judges. So while some ‘interpersonal tasks’ will still need to be done by humans, the ‘grunt work’ will soon be done by machines.

In his speech, Vos described it as unrealistic to simply insist that AI is too dangerous, or indeed to pretend that we can actually stop using it. But he warned that lawyers must be trained in how AI should and should not be used, and how clients, businesses and citizens can be protected from those who will inevitably try to use it for malign purposes.

He said: ‘There is a genuine risk, bearing in mind the speed at which these technologies are developing, that lawyers and judges will move too slowly to understand and respond to AI and its effects. The school of thought that pretends that it is too dangerous for lawyers and judges to get involved is a real problem. Only if we do get involved and we educate ourselves fully, can we be best prepared to serve the public better in the future AI-enabled world.’

But while economic norms suggest clients will not want to pay for human lawyers to do tasks that AI can do more cheaply, there are limits to this. It is not just about those areas of the law – such as care proceedings – where human qualities like empathy are obviously central. It is also about whether we can have confidence in the accuracy of work performed by AI, which is linked to the ability of human lawyers to check it. This is where ‘machine lawyering’ gets interesting from the professional negligence perspective.

As Vos said: ‘The real challenge is going to be evaluating the reliability of an AI’s work product. If an AI can produce legal advice that is, say, 98% reliable, that might compete favourably with the best of lawyers. But how can we know? By what parameters will we determine that a professional is using all due professional skill, care and diligence when they use an AI that is, say, 99% accurate, but not when using one that is, say, 95% accurate? And, of course, accuracy cannot anyway be gauged on a linear scale. This may become a whole new science in itself.’

Vos noted that in the medium term, machines may have capabilities that make it hard and expensive – or indeed impossible – for humans to check what they have done. ‘This is where professionals need to begin to develop systems to make sure that humans can be assured that what machines have done is reliable and usable, as opposed to dangerous and unreliable. In the law, we will need to explore how the product of a machine can be effectively challenged,’ he said.

We can expect future law books on professional negligence to contain entire new chapters on when a professional might be expected to use AI, and when they should not. But while AI is fast evolving, the pace of change will be controlled by the level of confidence that humans have in these emerging technologies.

Vos observed: ‘Humans seem to have huge faith in the AI incorporated within their computers and mobile devices, but maybe less faith and confidence in sitting in the back of a self-driving vehicle.’

He added: ‘Predicting how things will turn out may prove far more difficult than we think. But I guess that one thing is for sure. Even if the professionals will be damned if they do use AI and damned if they don’t use AI, professional negligence lawyers will be in great demand – whatever happens.’


Rachel Rothwell is editor of Gazette sister magazine Litigation Funding, the essential guide to finance and costs.

For subscription details, tel: 020 8049 3890.