‘Jolly useful’ was Lord Justice Birss’ verdict on ChatGPT, after he used the generative AI chatbot to help him draft a summary of an area of law for a 2023 judgment. Opinions vary on whether there is a place for generative AI in the courts, but Lord Justice Birss gave us an early blueprint for responsible AI use (as later reinforced by judicial AI guidance issued in December 2023).
Following the first golden rule of responsible AI, the judge selected an appropriate task for ChatGPT, based on his understanding of its capabilities. He generated an output that he could verify using his own expertise, saving him time without disrupting the court’s fact-finding or decision-making remit.
Importantly, Lord Justice Birss also retained accountability for his use of ChatGPT. The judgment, including any errors, is attributable to him. If I permit myself one critique, it is that the judge missed an opportunity to promote transparency by citing his use of ChatGPT in the judgment itself.
Contrast this with a recent, and in my view much more troubling, use of generative AI in the courts of the Netherlands. This summer, a judgment was issued by the Gelderland sub-district court in a dispute in which three matters of fact were at issue: (1) the average lifespan of solar panels; (2) the current average electricity price; and (3) whether certain insulation materials remained usable.
The court stated in its judgment (as translated from the original Dutch) that it decided on these matters 'partly with the help of ChatGPT'.
With the caveat that I am not a Dutch lawyer, this strikes me as a problematic use of generative AI in a judicial setting. The first issue is the lack of transparency as to which version of ChatGPT was used, which prompts were deployed, and precisely how the outputs contributed to the court’s reasoning alongside any other sources used.
As a result, we are left to assume that the court simply asked ChatGPT to state the average lifespan of a solar panel in a given set of circumstances and then used the chatbot’s response to inform its conclusions.
Why is this problematic? By way of simplified explanation, the model underpinning ChatGPT predicts likely combinations of words based on patterns in its training data (which is understood to include large volumes of text from the internet, books and other published sources). ChatGPT is neither a search engine, nor a reliable data analysis tool, nor a source of reproducible facts. Instead, it would have been interesting to see how the court might have used a more appropriate form of AI (such as conventional machine learning from a closed dataset) to support its analysis of a set of traditional sources including manufacturer information, industry reports, price data and human expert opinion.
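For readers wondering what ‘conventional machine learning from a closed dataset’ might look like in practice, the sketch below (in Python, using the scikit-learn library and invented placeholder figures that bear no relation to the evidence in the Dutch case) fits a simple regression to a small, auditable dataset. Every input, assumption and output can be inspected and reproduced, which is precisely what a chatbot’s answer cannot offer.

```python
# Illustrative sketch only: a reproducible way to estimate solar panel lifespan
# from a closed dataset. All figures are invented placeholders.
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical closed dataset: manufacturer-reported degradation rate (% per year)
# and observed useful lifespan (years) for a handful of panel models.
degradation_rate = np.array([[0.4], [0.5], [0.6], [0.7], [0.8]])
lifespan_years = np.array([32, 30, 28, 26, 25])

# Fit a simple linear regression to the closed dataset.
model = LinearRegression().fit(degradation_rate, lifespan_years)

# Estimate the lifespan of a panel degrading at 0.55% per year.
estimate = model.predict(np.array([[0.55]]))
print(f"Estimated lifespan: {estimate[0]:.1f} years")
```

The point is not the particular model, but that the dataset, method and result are all open to scrutiny by the parties and by an appeal court.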
As the AI hype continues, we must not forget responsible AI principles such as transparency, accountability and selecting an appropriate tool for the task at hand. This is even more critical when legal rights, remedies and access to justice are at stake.
Emma Haywood is a solicitor specialising in technology law, and is director and principal consultant at Bloomworks Legal.