Sir Geoffrey Vos has expanded on his vision of a judicial system transformed by artificial intelligence. In a speech to the Manchester Law Society, Vos encouraged lawyers and judges to get to grips with technologies which will soon be the norm for the legal industry.
In the words of Vos, there is 'nothing scary about AI'. Large language models (LLMs) do not operate at any level of abstraction beyond pattern recognition. They respond to user queries by repeatedly predicting the next word (or 'token') in a sequence, but lack the autonomy to go further than this (at least at present). As Professor Butler explains, 'you can tell an LLM to generate a wedding speech in the style of John Keats and it would do so very easily – but it would never occur to the LLM to do that itself'.
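By way of illustration, the short Python sketch below mimics that prediction loop with a deliberately crude 'model' that simply counts which word tends to follow which in a few lines of invented text. It is not how a real LLM is built – modern systems use neural networks trained on vast corpora – but it captures the essential point that output is produced one token at a time from learned patterns, with no understanding behind it.

# Toy illustration of next-token prediction: each word is chosen purely
# from patterns in the 'training' text, one token at a time. The text and
# the bigram 'model' below are invented for illustration only.
from collections import Counter, defaultdict

training_text = (
    "the court held that the contract was void "
    "the court held that the claim was statute barred"
).split()

# Record which token tends to follow which in the training text.
follows = defaultdict(Counter)
for current, nxt in zip(training_text, training_text[1:]):
    follows[current][nxt] += 1

def generate(prompt, length=8):
    tokens = prompt.split()
    for _ in range(length):
        candidates = follows.get(tokens[-1])
        if not candidates:
            break
        # Append the statistically most likely next token -- no 'understanding',
        # just pattern recognition over the training data.
        tokens.append(candidates.most_common(1)[0][0])
    return " ".join(tokens)

print(generate("the court"))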
That is not to say AI is without risk. In his speech, Vos recognised the need for society to protect itself from 'a very small number of ill-intentioned people', who may seek to misuse AI. While such challenges form part of a wider conversation about the way in which emerging technologies are shaping our lives, data integrity is (and always has been) of particular importance to the legal profession. We must be alive to the potential for incomplete, biased or even manipulated datasets to skew AI-generated outputs. As the adage goes: garbage in, garbage out.
So, how can we do this? First, we must use the correct tools for the job. Given that LLMs lack the capacity to assess the quality of source material, users must satisfy themselves that that material is relevant, current and comprehensive. As Vos notes, there is a fledgling industry of specialist legal LLMs, which will continue to flourish as AI becomes ingrained within the profession. As our understanding of AI grows, so too will our virtual toolbox.
Secondly, we must ensure we are using AI in the right way. As Vos noted, it is vital that practitioners understand what generative AI does and what it does not do. It is a common misconception that LLMs function in a similar way to legal databases – the contents of which are curated and verified by practitioners. In contrast, LLM-generated content is not subject to any form of human intervention before delivery to the end user. It is therefore for us to form a view as to the quality of any output. To quote Vos, our work product is, and always will be, our responsibility.
Vos also touched on the well-known risk of 'hallucination', which plagued some early adopters of AI technology. As generative AI is concerned only with predicting the next 'token' in a sequence, it has no comprehension of concepts such as 'truth', and so can generate seemingly credible but inaccurate responses. A less well understood issue is that of 'uncertainty quantification' – that is, the level of confidence an LLM attaches to its own output. This is a crucial requirement of trustworthy AI and an area of rapid research and development within the scientific community.
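To give a flavour of what uncertainty quantification involves, the sketch below computes a simple confidence score from the probabilities a model assigned to each token it generated. The figures are invented for illustration and the approach is only one of several under research; real systems would obtain the probabilities from the model itself. The idea is that a low score flags an answer the model was, in effect, guessing at.

# Simplified sketch of one uncertainty-quantification idea: if a model
# exposes the probability it assigned to each generated token, a low
# average probability suggests the answer should be treated with caution.
# The probability figures below are made up for illustration.
import math

def confidence_score(token_probabilities):
    # Geometric mean of per-token probabilities: 1.0 = fully confident.
    log_sum = sum(math.log(p) for p in token_probabilities)
    return math.exp(log_sum / len(token_probabilities))

confident_answer = [0.95, 0.91, 0.97, 0.93]  # model strongly favoured each token
hesitant_answer = [0.40, 0.22, 0.35, 0.18]   # model was choosing between many options

print(confidence_score(confident_answer))  # ~0.94 -> relatively trustworthy
print(confidence_score(hesitant_answer))   # ~0.27 -> treat with caution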
While these issues should not dissuade practitioners from adopting AI, we must understand them in order to use this powerful tool in the right way.
As Vos explained, there are certain tasks that AI can perform more quickly and more comprehensively than a human operator. With appropriate human oversight and control, AI can be harnessed to prepare initial drafts of legal documents and summarise vast quantities of information. As any litigator will confirm, the potential for AI to predict case outcomes may be of enormous value to lawyers, clients and funders. As AI becomes an integral part of our professional lives, there will be an expectation for lawyers to make responsible use of it.
AI is here to stay and practitioners must, as Vos says, 'get with the program'. There is no reason why the legal community should not embrace technology with such enormous transformative value. However, we can only truly do so with proper insight into its nature, functionality and limitations: such powerful tools must be used in the right way. As Vos says, we must take this journey 'one step at a time'.
Chris Felton and Katie Dyson of Gardner Leader LLP and Professor Keith T. Butler of University College London