We are being sold the benefits of artificial intelligence in legal services at every turn. The master of the rolls has taken the lead in a series of speeches to prepare us to accept its brave new world. The lord chancellor suggested just a few days ago that AI may be introduced in courts to help make them more productive.
We can recite the benefits in our sleep. Since lawyers are seen as expensive, and since there are legal deserts and horrendous court backlogs, AI will make legal services more accessible to all, reduce costs and deal efficiently with the piled-up cases.
We do not talk so much about the downsides, which are overwhelming. We speak a little about algorithmic bias, the nonsense produced by hallucinations, and even the risk that we will soon be unable to tell truth from falsehood because of high-quality AI fakes. But here are four further risks, which I consider more serious.
First, the quantity of electricity needed to sustain AI is gigantic. At a time when the trend is towards greening the world to save the planet, the energy demands of AI’s data centres are heading headlong in the opposite direction.
Global energy consumption by data centres is expected to more than double by the end of the decade, because they need large amounts of electricity both to power them and to keep them cool. Traditional supply is expected to be insufficient, and so the big tech companies are, one after the other (Microsoft, Amazon, Google), doing deals with nuclear power plants. Nuclear energy stocks have hit a record high as a result. These deals will take time to come online, with fossil fuels filling the gap in the meantime. Nuclear energy may not carry the same climate consequences, but its waste and the danger from accidents or war pose equally grave environmental risks.
Questions: What is the trade-off between the future benefits of AI and the ongoing risks to the environment? Have we been asked about this? Have we given our consent?
Second, there is the fact that the large language models behind generative AI are built on – let us not mince words – the theft of human production, so that the machine can be trained in what we do and then do it instead of us. Humans are, in general, neither paid nor credited for the use of their material as training data. There is mass litigation by authors, owners of images, actors, newspapers and other groups to try to remedy this wrong. It is still not clear how it will be remedied – under intellectual property, data protection, criminal or contract law (and for us, the remedy has largely to be sought against private companies headquartered outside our jurisdiction). We should not pretend that AI is some saintly good coming to rescue us.
Question: Do we care if the material coming out of generative AI has been produced by someone else, taken from them without consent or payment, and is now being used by us as if it were our own?
The third area of concern is ownership, which follows directly from cost. It is unbelievably expensive to develop machine-learning models (data centres, energy use, talent recruitment, infrastructure and so on). In the latest quarter alone, Amazon, Microsoft, Alphabet and Meta poured a combined $52.9bn into capital expenditure. Venture capital firms have invested $64.1bn in AI startups so far in 2024. A few companies will own nearly all of AI. Governments cannot compete (and, in the eyes of many, should not compete); worse, they are struggling to regulate these super-rich giants, wanting the private investment these companies bring while also wanting to rein them in. We have seen with the billionaire ownership of X/Twitter how algorithms can be manipulated.
In the case of generative AI legal services, these machines are being trained on huge quantities of US material, and trained outside our jurisdiction.
Question: What are the consequences of the ownership of such all-powerful machines being in the hands of a few private companies (with, for us again, the added twist that the companies are headquartered outside our jurisdiction)?
Finally, there is the fourth and maybe biggest risk of all – AI running out of human control like the computer HAL in 2001: A Space Odyssey, with possibly apocalyptic consequences. This is a fear shared by some senior people in AI, including a recent Nobel prize winner.
The Panglosses of the legal world may continue singing that, with AI, all is for the best in the best of all possible worlds. But their happy song does not take account of the other side. I know that we all use AI in everyday transactions, that the momentum feels unstoppable, and that the Law Society must continue to deal with it. But maybe we should all cry ‘Stop!’ and think about the consequences.
Jonathan Goldsmith is Law Society Council member for EU & International, chair of the Law Society’s Policy & Regulatory Affairs Committee and a member of its board. All views expressed are personal and are not made in his capacity as a Law Society Council member, nor on behalf of the Law Society.