The exponential rise of GenAI heralded a deluge of new security threats – are law firms up to the challenge?

Most UK law firms are now using artificial intelligence in some capacity. Firms also anticipate an increase in its use across core processes – document drafting and review, onboarding and compliance, workflow management and e-discovery. AI is handling legal tasks, but it is also introducing sophisticated security risks.

Joanna Goodman


New skills such as prompt engineering are used to mitigate the risks of generative AI (GenAI), along with limiting search parameters, ring-fencing confidential data and checking outputs for hallucinations. Mobile apps use multi-factor authentication for onboarding, compliance and client communication. But how far do these measures take into account the new threats presented by allowing AI to reach deep into the legal workplace, as the Financial Times described it this week?

There is much discussion about the legal issues surrounding AI and questions of accountability and liability. However, significantly less attention is paid to the fact that because GenAI is accessible, affordable and relatively easy to use, it creates complex security challenges for law firms and their clients. AI supports efficiency, but it also facilitates cyberattacks and fraud.

Another AI arms race

At the Institute of Chartered Accountants in England and Wales’s International Anti-Corruption Day roundtable on 9 December, Oli Buckley, professor of cyber security at Loughborough University, cited two examples of AI-powered fraud. In September 2024, research by Starling Bank found that 28% of UK adults said they had been targeted by an AI voice cloning scam (where fraudsters use AI to replicate the voice of a family member, friend or colleague) at least once in the previous year. On the corporate side, in January 2024 an executive at engineering firm Arup in Hong Kong was tricked into sending £20m to a criminal gang by an AI-generated video call featuring a digitally cloned version of a senior manager.

While there are sophisticated solutions that authenticate images and detect cloned voices, AI technology is developing so rapidly that detection systems are constantly lagging behind. As Buckley observes, this is because fraudsters are not constrained in how they acquire and use technology, unlike their victims in large enterprises and professional services. There is an arms race on both sides of the AI equation.

GenAI supports business efficiency but it also enables fraudsters to target and scale relatively straightforward scams. Prompts help scammers write in a more convincing and polished way, making fraud harder to detect, explains Buckley: ‘Phishing emails have been around for a long time. Scammers now have the tools to make them better, and using agentic AI they can schedule them at times when people are more vulnerable. Deepfakes are used to produce authentic-looking ID and insurance documentation to confound compliance and identity checks.’

What is the best approach to avoid AI fraud? Peter Wright, managing director of Digital Law, and co-author of the Cyber Security Toolkit, published by the Law Society, says: ‘Having checks in place around identifying your client, but also ringfencing critical decisions – such as cash transfers, transfers of sale proceeds etc – should already be in place.’

Know what you’re getting into

'Initial AI projects should be scoped very narrow and simple so that everybody involved learns how to make viable AI solutions before scaling them to more sophisticated use cases'

Ian Broom, Fliplet

Ian Broom, CEO of app development platform Fliplet, says another AI-related security threat is the possibility of attacks on AI vendors storing sensitive data, leading to data leaks or ransom attacks on clients.


Broom recommends ‘a low-risk scenario for testing AI. This gives you the ability to be creative without concerns that sensitive data will leak. For example, use agents to determine likely security issues and report them, then start assessing the low-risk issues before progressing to more complex ones once you are confident you can control the agent. Initial AI projects should be scoped very narrow and simple so that everybody involved learns how to make viable AI solutions before scaling them to more sophisticated use cases. For example, don’t create an agent to analyse all your contracts initially, build a simple tool that humans can run on a single contract to ensure it works then slowly scale it up.’
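To make Broom’s ‘start narrow’ advice concrete, here is a minimal sketch of what that first step might look like: a single-contract review helper that a human runs deliberately on one document, rather than an agent turned loose on the whole contract store. The call_llm function, the prompt and the file handling are illustrative placeholders, not a reference to any particular product; a firm would wire this to whichever vetted, ring-fenced model endpoint it has approved.

```python
# Hypothetical single-contract review helper (illustrative only).
# A human picks one file, runs the tool, and checks the output before
# anything is scaled up to batches of documents or autonomous agents.
from pathlib import Path

REVIEW_PROMPT = (
    "You are reviewing a single contract. List unusual indemnity, "
    "liability-cap and termination clauses, quoting the relevant text."
)

def call_llm(prompt: str, document: str) -> str:
    """Placeholder: connect only to the firm's approved, ring-fenced model."""
    raise NotImplementedError("Wire this to a vetted provider before use.")

def review_one_contract(path: str) -> str:
    text = Path(path).read_text(encoding="utf-8")
    # Deliberately no loop over a folder: scope stays at one document
    # until a human has verified that the output is reliable.
    return call_llm(REVIEW_PROMPT, text)
```

Only once the output of this deliberately narrow tool has been checked by people who understand the contracts does it make sense to widen the scope, which is exactly the progression Broom describes.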

‘As firms embrace GenAI they are using more APIs [application programming interfaces] and plug-ins, so they broaden the attack vector for possible vulnerabilities,’ observes Buckley. Unsurprisingly, law firms that are leading in AI adoption have robust supplier due diligence processes and dedicated cyber/information security teams.

Michael Kennedy, head of innovation and legal technology – research and development at Addleshaw Goddard, assesses legal AI tools as part of his remit: ‘Ensuring you do proper supplier due diligence is crucial and, specifically with AI providers, making sure that you are aware of and treating sub-processors of data the same as any other supplier. It is important to know where your data goes. You must properly understand the data flows, making sure that any assurances given by a supplier in relation to data centres and storage (including storage of prompts, documents, input and output data) are captured in the contract. Supplier due diligence is not just a one-off. A genuinely robust supplier due diligence programme means ongoing monitoring, whether this is being aware if suppliers are introducing new AI features that may require further contractual protections or checks, or monitoring compliance on an ongoing basis. This can help control the introduction of AI agents into your firm’s environment.’

Wright recalls advising a client that was using an AI administrative assistant that joined video conference meetings and recorded everything. Third parties on the call could not disable the recording or obtain copies of it without opening an account with the provider. Wright explains that this type of service – automatically attending and recording meetings and generating notes and transcripts without obtaining the prior consent of all participants – is in breach of GDPR and the UK Data Protection Act.

Wright also underlines the need to think about fail-safes. He references a recent story about a tech entrepreneur who nearly missed a flight because the autonomous Waymo taxi that was driving him to the airport circled the airport car park eight times. He had no way of stopping the vehicle and had to call customer service to resolve the software issue that had caused the problem. Wright observes: ‘Customer service is often non-existent on these systems and AI… still struggles with complex tasks. If you are using AI, you should know what customer support is like. Is the system truly reliable for the task you are giving it?’

Who’s in charge?

The All-Party Parliamentary Group on AI meeting on redefining government and welfare with AI took place on Monday, one week after Sir Keir Starmer announced the government’s AI Opportunities Action Plan. While evidence givers recognised AI’s value to public services, there were questions over the government’s ability to realise its AI ambitions. Rachel Astall, chief customer officer at Beam, a social enterprise that helps homeless people into jobs, described how automatically generated reports were supporting community welfare services by cutting the burden of paperwork. Hugh Eaton, former soldier and expert adviser at the Boston Consulting Group, observed that 44 of the 50 recommendations in AI opportunities adviser Matt Clifford’s report had been assigned to the under-resourced Department for Science, Innovation and Technology. One important question went unanswered: who in government is actually in charge of AI and, crucially, who holds the budget?


Drive by: passenger nearly missed flight after autonomous Waymo taxi circled airport car park eight times

Double agents?

Agentic AI has intrinsic security implications. While GenAI requires a prompt for each task, agentic AI can autonomously execute multi-step tasks that involve decisions. Broom explains: ‘Agents dramatically increase the complexity of attacks as [agentic] AI can automatically adjust its approach based on the data it receives. Agents will need guardrails, lots of testing and potentially humans in the loop to ensure they don’t do undesirable tasks.’

Agents can be positive or negative when it comes to IT security, he adds: ‘Positively, they could [be used to] continuously check for security issues, report them, and continuously assess whether or not the fixes that are applied are still working. It could be like a continuous, intelligent penetration test without the cost and time associated with human-led penetration tests. Negatively, an attacker can use an agent continuously running to pry for security holes and intelligently analyse the data that’s returned to come up with more and more sophisticated attacks that are much harder to defend against.’

‘Any agentic AI that acts on our behalf should have tightly controlled privileges,’ advises Buckley. ‘I’m thinking of scheduling, drafting, or limited file access instead of an all-access pass. It’s crucial that the solution logs every action the AI takes, so there is a clear audit trail and [the IT function] can quickly step in if something looks off.’ He explains that the risk increases depending on the agentic AI’s level of autonomy. ‘Making plans and executing decisions is minimal risk, but sending emails and interacting with databases is high risk.’
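As a rough illustration of what Buckley describes – tight privileges, an audit trail and a human in the loop for high-risk actions – the sketch below wraps every action an agent can request in an allowlist check and a log entry. The action names, log file and approval mechanism are assumptions for the purpose of the example, not a description of any particular firm’s setup.

```python
# Hypothetical guardrail layer for an agentic assistant (illustrative only).
# Every requested action is logged for audit; low-risk actions run unaided,
# high-risk ones (email, database writes) require explicit human approval.
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(filename="agent_audit.log", level=logging.INFO)

LOW_RISK = {"draft_document", "propose_schedule"}   # agent may run unaided
HIGH_RISK = {"send_email", "update_database"}       # human sign-off required

def execute_action(action: str, params: dict, approved_by: str | None = None):
    # Write the audit record before anything happens, so every request
    # is traceable even if it is subsequently blocked.
    record = {
        "time": datetime.now(timezone.utc).isoformat(),
        "action": action,
        "params": params,
        "approved_by": approved_by,
    }
    logging.info(json.dumps(record))

    if action in LOW_RISK:
        return run_tool(action, params)
    if action in HIGH_RISK and approved_by:
        return run_tool(action, params)
    raise PermissionError(f"'{action}' blocked: not allowlisted or not approved")

def run_tool(action: str, params: dict):
    """Placeholder for the firm's actual integrations."""
    raise NotImplementedError
```

The design point is the one Buckley makes: drafting and scheduling sit inside the allowlist, anything that sends email or touches a database is held until a named person approves it, and every request leaves a log entry the IT function can review if something looks off.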

Out of the shadows

Wright and Buckley highlight the danger of ‘shadow AI’ – similar to shadow IT, which evolved as personal computing overtook enterprise systems in speed and capacity. In terms of IT security, this is the danger of people using public AI tools such as ChatGPT to work on confidential or client information. ‘Law firms should be careful about using public AI tools,’ says Kennedy. ‘[Our internal] AGPT helps us avoid this issue, but where possible you must be able to monitor or control what people do with information. Tools like Copilot also offer a good alternative, as they are hosted on your Microsoft environment.’
