Judges should not rule out the careful use of artificial intelligence, the head of civil justice has said - while warning of the technology’s potential risk to privacy, accuracy and fairness. Sir Geoffrey Vos, master of the rolls, was welcoming the publication of ‘timely and important’ guidance on generative AI, machine learning and other AI technologies.
'The judiciary must embrace the adoption of developing technologies in our justice system, whilst ensuring that AI is used safely and responsibly,' Vos said. 'AI can and will enable us to develop a digital justice system that is efficient and accessible for all.'
The six-page guidance note was produced by 'a cross-jurisdictional judicial group', HM Judiciary said. It will be updated as and when necessary. It appears just days after a tribunal judgment revealed that a litigant in person had attempted to boost their case by citing authorities which turned out to have been generated by a large language model system. The guidance warns that such tools can 'make up fictitious cases, citations or quotes, or refer to legislation, articles or legal texts that do not exist'.
While noting that AI tools such as technology assisted review (TAR) 'have been used by legal professionals for a significant time without difficulty', the guidance warns that judges may need to remind individual lawyers of their obligations and ask them to confirm that they have independently verified the accuracy of any research or case citations generated with AI assistance.
Judges, meanwhile, are warned that anything they type into a public AI chatbot could become publicly known and that any unintentional disclosure of private information should be reported.
Summing up the guidance’s message, Vos said: 'Judges do not need to shun the careful use of AI. But they must ensure that they protect confidence and take full personal responsibility for everything they produce.'