The master of the rolls was preaching to the converted at the latest LawtechUK event. His message was clear: artificial intelligence is here to stay and the profession should be prepared to embrace it.
It was a congregation of the converted. A quickfire poll of the 165 attendees at the latest LawtechUK event revealed not only that everyone was an artificial intelligence user (perhaps more significantly, that they knew they were AI users) but also that the vast majority had used the technology in the past day. So when master of the rolls Sir Geoffrey Vos spoke of the imperative to build bridges between AI sceptics and enthusiasts in the legal world, it was fairly clear whom he was expecting to go forth and evangelise.
The message to take to the sceptics? Don’t let ‘silly examples of bad practice’ – such as the fictitious case authorities famously spotted by a New York judge – cause you to shun the technology in its entirety. The wider world is embracing generative AI: there is no way that the legal profession can remain aloof. ‘There is no real reason why we should not embrace AI’ – albeit ‘cautiously and responsibly’, Vos said.
Responsible use means following three basic rules:
- Understand what the technology does and doesn’t do. Large language models (LLMs) generate their output by predicting the most likely combination of words rather than referring to an authoritative database.
- Avoid putting confidential information into public LLMs ‘because doing so makes the information available to the world’.
- Carefully review any document or summary produced by an AI program. ‘You are responsible for your work product, not ChatGPT,’ Vos said.
Acting responsibly is at the heart of proposals published by campaign group Justice to regulate the use of AI more broadly across the justice system. In a 50-page document, author Sophia Adams Bhatti recommends a ‘rights-based approach’, drawing on established and enforceable rights.
She told the LawtechUK event that anyone looking to use AI in the justice system should follow two core principles: ensure that the tool is clearly aimed at improving one or more of the justice system’s core goals; and ensure that human rights are embedded at each stage of its design, development and deployment.
‘There’s a duty on all of us to deliver a justice system which upholds human rights and the rule of law wherever you sit in the supply chain,’ she said.
Of course, all this is happening at a time of worldwide debate about how blossoming AI technologies should be regulated. Vos contrasted the relatively relaxed approach of the England and Wales judiciary with that of the Supreme Court of New South Wales, which last week published a practice note effectively banning many uses of generative AI, including generating the content of affidavits and witness statements. Too late, Vos suggested: ‘AI is already being used in many jurisdictions for some of the purposes that the NSW guidance says it should not be. I doubt we will be able to turn back the tide.’
For a gathering of presumably gung-ho AI enthusiasts, this week’s event was notable for the lack of outlandish proposals about AI removing humans from the legal scene – or even sounding the death knell of the billable hour. ‘It’s about the little wins,’ said Dee Masters of Cloisters Chambers, who reckons Microsoft’s Copilot saves her five hours a week in her employment tribunal caseload.
‘It’s very much like having an enthusiastic and capable paralegal sitting next to me,’ she said. ‘Imagine what it can do if tailored for use by a judge.’ Masters stressed she was not talking about ‘robo-judges’ but about the possibility of speeding up hearings. AI could ‘hoover up evidence and at the end of the day generate a gap analysis’, she suggested. This could transform proceedings in the employment tribunal, the pace of which is today limited by the speed of the judge’s typing or handwriting.
The trouble is that even such apparently modest uses of generative AI can throw up tricky legal questions.
At the event, the master of the rolls revealed that the UK Jurisdiction Taskforce, which he chairs, is working on a statement on some of the legal issues raised by the widespread use of AI, along the lines of its 2019 statement of the law covering crypto-assets and smart contracts. It will answer questions such as ‘how does vicarious liability apply to loss caused by AI?’ and ‘when can a professional be liable for using or failing to use AI in the provision of their services?’.
The answers will be eagerly awaited.