In the second of two features on AI adoption, Joanna Goodman looks at the evolution of the technology’s regulation, as the legal sector learns how and when to trust it
The low down
The number of legal professionals regularly using artificial intelligence has more than doubled since July last year. Despite its reputation for conservatism, the legal sector is thus in the vanguard of AI use. Yet, as adoption increases, trust in AI, especially generative AI using large language models, is falling. Public dressing-downs for lawyers who trusted its false citations gained wide publicity. Might closer regulation of AI help restore trust? The EU believes so, and has adopted the Artificial Intelligence Act. The UK, by contrast, has backed only light regulation. Such dissonance is a headache for the international legal sector and its clients. But if they can negotiate such uneven regulation, the rewards will be significant.
The European Parliament adopted the EU Artificial Intelligence Act last week. It represents the EU’s first regulatory framework for AI. While other regulation also applies to the development and use of AI technology, the new act positions the EU second only to China in terms of AI-specific legislation. In the UK, AI is not regulated, but firms with international clients and operations have to comply with the regulation in jurisdictions where it is.
The EU act takes a risk-based approach similar to that of the General Data Protection Regulation (GDPR), prohibiting AI uses that pose ‘unacceptable risk’ to fundamental rights, and introducing restrictions and obligations around data usage and transparency. However, while the act places requirements on the companies that produce generative AI models, such as OpenAI’s GPT-4, Google’s Gemini and Anthropic’s Claude, in terms of transparency, risk mitigation and data governance, it does not specifically regulate the organisations that use these models – including law firms.
Meanwhile, more lawyers are routinely using generative AI. A LexisNexis survey of 1,200 legal professionals in the UK published in February found that 26% of respondents are using generative AI tools in their work at least once a month. That is up from 11% in a similar survey conducted in July 2023.
Growing familiarity with generative AI software is increasing awareness of its risks, which are highlighted by the EU act and in published guidance from the Information Commissioner’s Office, the Solicitors Regulation Authority and the Law Society.
LexisNexis has identified hallucinations (57%), security (55%), and trust (55%) as the biggest barriers to adoption. While ringfencing applications and data, standardising prompts, and introducing guidance and usage policies can mitigate risks around accuracy and security, trust is a broader challenge for AI adoption. According to data from public relations consultancy Edelman, trust in AI companies globally has dropped to 53%, down from 61% five years ago.
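One common form of prompt standardisation is a fixed template that confines the model to ringfenced source material and requires it to flag gaps rather than improvise. The sketch below is illustrative only; the template wording and the build_prompt helper are hypothetical examples of the approach, not any firm's actual policy or product.

```python
# Illustrative sketch of a standardised prompt template that ringfences
# source material. The template text and helper are hypothetical examples,
# not any firm's actual policy or product.

STANDARD_TEMPLATE = """You are assisting a solicitor.
Answer ONLY from the numbered extracts below.
If the extracts do not contain the answer, reply exactly: "Not found in provided sources."
Cite the extract number for every statement you make.

Extracts:
{extracts}

Question: {question}
"""

def build_prompt(question: str, approved_extracts: list[str]) -> str:
    """Assemble a prompt from pre-approved (ringfenced) extracts only."""
    numbered = "\n".join(f"[{i+1}] {text}" for i, text in enumerate(approved_extracts))
    return STANDARD_TEMPLATE.format(extracts=numbered, question=question)

if __name__ == "__main__":
    extracts = [
        "Clause 4.2: The tenant must repair the interior within 30 days of notice.",
        "Clause 9.1: Either party may terminate on six months' written notice.",
    ]
    print(build_prompt("What is the notice period for termination?", extracts))
```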
‘Trust is a touchstone for the whole AI revolution,’ says Matt Hervey, partner at Gowling WLG, who sits on the City of London Law Society AI committee. ‘People have an unconscious bias that they can trust the output of a computer because they consider it to be objective and reliable. But generative AI is statistical not deterministic, so there is a danger of people trusting its output when it is not appropriate to do so.’ He cites widely reported instances of lawyers and litigants in person being caught out using ChatGPT-generated non-existent citations and submissions in court.
Explainability as required under the EU act is a challenge for large language models. ‘In order to produce an output, a [generative] AI model might perform a trillion calculations, so there’s an open question about how it reached a decision and whether that was adequate,’ Hervey explains.
‘To ensure trust, we have to move away from thinking purely about the operation of an application. It’s about the context in which you use it, the safeguards that are in place, and the level of human oversight, and that’s where regulation, guidelines and best practice come into play. Consequently, the EU has introduced cross-cutting legislation, and the UK government has asked all 15 regulators to present detailed plans for regulation by the end of April.’
He adds that law and other professions need to comply with AI-specific regulation (such as the EU act and legislation in other jurisdictions, notably China and Dubai) alongside guidance from professional regulators such as the SRA, which published a risk outlook report at the end of 2023.
As Joe Cohen, director of innovation at international firm Charles Russell Speechlys, observes, the UK has no specific AI regulation. Although there is speculation around adopting something similar to the EU act, there is also some consensus outside the EU that it is too onerous, particularly on explainability.
‘You can ask a large language model for its logical thought processes, but the EU AI Act is asking for the exact inferences that the machine is making,’ Cohen says. ‘We need to strike a balance that doesn’t stifle innovation, which right now means AI adoption.’
Regulation is the bedrock
Arnav Joshi, senior technology lawyer at Clifford Chance, gave evidence on AI regulation to the House of Lords Communications and Digital Committee. He aligns with Cohen: ‘The EU is a regulation-first jurisdiction. It already has the GDPR, the Digital Services Act, and the Digital Markets Act, and the latest AI act has been in development since 2019.’
Joshi says this type of ‘horizontal’ legislation does not target specific industries or sectors. Rather, it regulates the application of technology and its outcomes – that is, what are the worst harms that we need to prevent, and what standards do we need to uphold? ‘For something as broad as AI,’ he says, ‘we should not necessarily put specific guidance into regulation. Rather regulation is the bedrock. Then it is the job of the regulators and the regulated entities to come up with more specific guidance.’
While the government’s approach to AI regulation is ‘wait and watch’, UK businesses and law firms that operate internationally will need to comply with global standards, which include AI laws in Europe and China.
Lawyers are well aware of the need to maintain their status as trusted advisers, while realising the new opportunities presented by emerging technologies like generative AI.
‘I’m not sure that the legal industry has ever seen a similar shake-up,’ says Joshi. ‘As lawyers, we have an obligation to serve society which includes a duty of care, but also delivering efficient, cost-effective advice. There are immense backlogs in corporate law and in the criminal justice system. The use of AI to deliver services in a smarter manner is definitely the way forward. However, automation can be a double-edged sword so a sensible approach to AI adoption is to start with low-risk tasks.’
Dr Catriona Wolfenden, partner and director of product and innovation at Weightmans, considers regulation as ‘a good thing, promoting accountability and trust, but because technology moves so fast, it is important to have a flexible framework, and not refer to specific technologies that soon become obsolete’.
Wolfenden believes that the EU AI Act is taking a product liability approach to AI, potentially forcing new technologies into existing legal frameworks. ‘For law firms, any regulation needs to take into account future development so that it doesn’t stop us using technology to deliver better services to our clients, which is why we need malleable regulation.’
As Michael Kennedy, senior manager, innovation and legal technology at Addleshaw Goddard, points out, under the EU act, law is not a high-risk area. ‘Ultimately, generative AI is just another cloud software tool that uses machine learning,’ he says. ‘For a big commercial law firm, the EU act represents guidance rather than a strict set of rules. We don’t work with much personal data, we have the resources to build our own internal tools and we are serious about data processing and data protection. So regulation won’t change too much for us. However, consumer-facing AI apps represent a higher risk because they work with personal information.’
Data odyssey - down in the archives
A case in Manchester last year in which a litigant in person was pulled up for presenting false citations to court, having used ChatGPT to find relevant precedents, highlighted ChatGPT’s tendency to hallucinate. This underlined the importance of checking the outputs of any generative AI platform against the original data source.
‘The problem is that litigants in person aren’t in a position to check ChatGPT outputs,’ says Paul Massey, founder and CEO of Tabled. Massey was inspired to set up Project Odyssey – refining The National Archives’ legal data for access to justice. Supported by Innovate UK, the project will enable The National Archives to maximise the utility of its legal data, which includes legislation and case law, and create a publicly available access to justice app.
Tabled is leading a consortium which includes Keele University, Swansea University and Northumbria University, and legal explainability start-up Legal-Pythia. It will refine the dataset by adding metadata and legal context such as obligations and conditions, and fine-tune large language models to interrogate it. The user interface will be a generative AI chatbot which will help people facing legal issues. Tabled’s workflow platform, which includes a decision-tree builder, will provide a structured entry point into the app.
‘Access to data is important and one of the reasons we chose to work with The National Archives is that the data is owned by the Crown,’ explains Massey. ‘When the project is finished, anyone will be able to obtain a licence from The National Archives to use the refined data.’
The project will also create an automatic annotator, so that as new data is added to the dataset, it is continually updated. While the dataset is already machine-readable using LegalDocML, Project Odyssey will enable it to be interrogated by large language models, and serve as training data to improve them.
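The article does not describe the annotator's internals, but the general pattern – scanning machine-readable judgments and tagging sentences that express obligations or conditions as metadata – can be sketched roughly as follows. The tagging rules and data structures here are hypothetical simplifications, not Project Odyssey's actual pipeline.

```python
# Rough, hypothetical sketch of an automatic legal-text annotator of the kind
# described above: it scans sentences and records simple metadata for
# obligations and conditions. Not Project Odyssey's actual code or rules.
import re
from dataclasses import dataclass, field

OBLIGATION_MARKERS = re.compile(r"\b(shall|must|is required to)\b", re.IGNORECASE)
CONDITION_MARKERS = re.compile(r"\b(if|unless|provided that|subject to)\b", re.IGNORECASE)

@dataclass
class Annotation:
    sentence: str
    labels: list[str] = field(default_factory=list)

def annotate(text: str) -> list[Annotation]:
    """Split text into sentences and attach obligation/condition labels."""
    sentences = re.split(r"(?<=[.;])\s+", text.strip())
    annotations = []
    for sentence in sentences:
        labels = []
        if OBLIGATION_MARKERS.search(sentence):
            labels.append("obligation")
        if CONDITION_MARKERS.search(sentence):
            labels.append("condition")
        if labels:
            annotations.append(Annotation(sentence, labels))
    return annotations

if __name__ == "__main__":
    judgment = ("The landlord shall maintain the common parts. "
                "If the tenant defaults, the deposit may be retained.")
    for a in annotate(judgment):
        print(a.labels, "->", a.sentence)
```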
AI adoption and trust
Wolfenden believes regulation, policies and other vehicles for accountability foster the chains of trust which enable legal processes and tasks to be entrusted to AI applications.
Clifford Chance’s approach to AI adoption focuses sharply on upholding the standards of service delivery that underpin its global reputation. ‘The outcomes cannot be different when we use AI, but AI helps us work faster and more creatively,’ explains Joshi.
‘For example, using Copilot instead of typing up meeting notes gives trainees time to do some real legal thinking. We have established principles and policies around AI use and we always start small. We use client information only when clients consent and are comfortable with it. We build in-house systems that are robust and secure to leverage the collective hive mind of one of the largest law firms in the world. And we are now starting to innovate with those systems.’
‘The biggest risk of generative AI is misinformation and disinformation, and what that means for our regulated professions’
Arnav Joshi, Clifford Chance
For example, CC Assist is the firm’s in-house version of ChatGPT, built in collaboration with OpenAI. ‘Over the next few years, the starting point of research will no longer be a database or a template, or asking a trainee to research something from scratch,’ Joshi says. ‘Rather it will be to ask an AI assistant, which will instantly find the initial information you’re looking for. However, this is only the starting point, perhaps 20-30% of the work, and it’s up to you to use the time saved to refine the output.’
Joshi notes: ‘The biggest risk of generative AI is misinformation and disinformation... The legal knowledge generated by lawyers and courts is one of our most reliable information repositories. Any risk of poisoning that body of knowledge represents a serious risk to our industry – and to our democracies – which is why you don’t see law firms using AI to deliver legal advice.
‘The well-publicised examples of lawyers presenting ChatGPT hallucinations as evidence in court serve as cautionary tales to the rest of the industry. Trust will always be a hugely important factor, so while it makes sense to use AI to do 10% or 20% of legal tasks, we cannot risk our professional standards or reputation and the trust that has been built up over hundreds of years.’
This year’s large language model
The legal sector has been in the vanguard of generative AI adoption. In the LexisNexis survey, the top use cases were drafting legal documents at 91% (up from 59% in July 2023) and researching legal matters at 90% (up from 66% in July 2023).
Given the rapid take-up of generative AI, what are the next steps for law firms as the technology evolves and matures? Having conducted a comprehensive pilot of generative AI tools as part of its buy-and-build strategy, Addleshaw Goddard is focusing on integration.
Kennedy has a current example. ‘We were working with CoCounsel when it was acquired by Thomson Reuters, and we also use Thomson Reuters systems, so it’s important that they get the integration right. LexisNexis are also integrating AI into their systems. We are also looking at using generative AI to search across different knowledge and information repositories.’
Weightmans has conducted a few trials of generative AI applications, but has found much of the technology unconvincing. ‘Some of it feels like technology looking for a problem to solve, or suppliers feeling that they need to add generative AI to their products,’ says Wolfenden.
She echoes the theme of legal tech ‘AI washing’ that has been called out by legal tech customers and investors. This entails repackaging existing applications as ‘AI-powered’ or adding a wrapper to ChatGPT and presenting it as a new product.
Weightmans recently secured funding to examine trust in the context of emerging technology and to work out the best way to deliver pilots while keeping users receptive to new tools, especially as generative AI’s tendency to hallucinate can erode trust. Wolfenden is planning future pilots, roll-outs and use cases with trust top of mind. ‘Not every pilot will work, but we need our user groups to stay receptive,’ she adds.
Cohen at Charles Russell Speechlys has moved on from fine-tuning GPT large language models to training smaller models with fewer parameters. He says: ‘My big thing for 2024 is a “mixture of experts” approach – for example, you have an AI at the top that triages the query and underneath you have a selection of smaller models that each do different things.’
This produces a tailored approach, while using less energy than large language models, he adds: ‘A big problem with generative AI is that it uses an enormous amount of energy. Because the models have so many parameters, each call uses a lot of energy. So unless there is a huge breakthrough with the Nvidia chips that large language models use, the way to use less energy is to use smaller models. Rather than using the entirety of GPT-4 for each call, you could use triaging to send each query to the right AI and expend less energy. The question is when will we have a model that is good at summarising, another that is good at legalese, and another that is an expert in construction or IT? We should be training specific models to do specific things.’
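Cohen's 'mixture of experts' routing can be pictured as a lightweight triage layer that classifies each query and hands it to a smaller specialist model. The sketch below uses a simple keyword classifier and placeholder expert functions; the model names and routing rules are hypothetical illustrations, not Charles Russell Speechlys' implementation.

```python
# Hypothetical sketch of the triage-and-route pattern described above:
# a cheap classifier decides which smaller "expert" handles each query.
# The expert functions are placeholders standing in for fine-tuned models.

def summarisation_expert(query: str) -> str:
    return f"[summariser model] handling: {query}"

def legalese_expert(query: str) -> str:
    return f"[legal-drafting model] handling: {query}"

def construction_expert(query: str) -> str:
    return f"[construction-law model] handling: {query}"

EXPERTS = {
    "summarise": summarisation_expert,
    "draft": legalese_expert,
    "construction": construction_expert,
}

def triage(query: str):
    """Very rough keyword triage; a real router might itself be a small classifier model."""
    lowered = query.lower()
    for keyword, expert in EXPERTS.items():
        if keyword in lowered:
            return expert
    return legalese_expert  # fallback expert

if __name__ == "__main__":
    for q in ["Summarise this lease", "Draft an indemnity clause",
              "What are the construction adjudication deadlines?"]:
        print(triage(q)(q))
```

The appeal of this design, as Cohen describes it, is that each call runs through a model with far fewer parameters than a frontier model, so a query only pays the compute cost of the specialism it actually needs.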
‘Adoption and engagement is always a challenge,’ says Cohen, who joined Charles Russell Speechlys at the end of 2023. ‘Since I joined the firm I have met more than half the partners to talk through their plans to become more forward-looking and innovative. I looked at what the lawyers were doing, and gaps in the legal tech portfolio. The way to engagement is not to start by asking for use cases, as it takes forever to get a good list and it will probably be unrealistic because until people have used AI they won’t know how they can use it.’
To engage lawyers, Cohen says, ‘start with broad recommendations – dos and don’ts, like don’t plug in personal data, but you can use documents if you have consent – and let people play with the technology’. Then, he adds: ‘Document how people are using it and compile a list of use cases and prompts to send around to the people who haven’t used it.’
Kennedy at Addleshaw Goddard highlights the importance of gaining lawyers’ trust to keep them engaged in piloting new products. ‘Part of building people’s trust is making sure that we are not wasting their time,’ he explains. ‘We gave them clear parameters and time frames for testing. We trained them to use products alongside their usual work and provide feedback. We looked at 75 tools, but we only tested six because we were respectful of our lawyers’ time.’
The Addleshaw Goddard innovation team also does a lot of outreach work, with workshops in different offices, and webinars educating clients and stakeholders about generative AI and its legal use cases. Ultimately, the key to engagement with generative AI is the human factor: education, communication, respecting people’s time and managing their expectations.
The first AI adoption feature can be read here