A recent Reuters and Oxford University poll found that a very small percentage of the 12,000 adults surveyed across six countries, including the US and UK, were using artificial intelligence products (such as ChatGPT by OpenAI) on a daily basis. So why the hype?
First, AI is integrated into far more aspects of our online lives than we realise, from Amazon Alexa to navigation apps, advertising and spam filters. Second, bucking the trend in the Reuters report were, unsurprisingly, 18- to 24-year-olds. Most importantly, the hype is warranted by the performance of the products.
While recognising that generative AI is only a small subset of AI tools, anyone who has used ChatGPT with GPT-4 (for example) will understand that, as a means of providing information, it makes a traditional Google search look rudimentary. No wonder it was the fastest application to reach 100 million monthly active users.
So, if generative AI is going to dominate our online information tools, what impact will this have on our reputation, data and privacy rights? During the mid-2000s, defamation and privacy lawyers found their attention diverted away from cases against newspapers and, less frequently, broadcasters, towards the US corporates providing search engines and social media platforms. Google’s near-complete domination of the search engine space meant that the first page of a Google search became the key battleground for brands and personal reputation.
Of the multitude of cases launched against Google, Google Spain (Google Spain SL, Google Inc. v Agencia Española de Protección de Datos, Mario Costeja González [2014]) had perhaps the greatest impact for those individuals seeking to manage their reputation. Following that judgment, Google had to establish a mechanism to allow the removal of information in accordance with data protection principles, including the erasure of old, irrelevant or inaccurate information and giving effect to a ‘right to be forgotten’. Lawyers and clients alike, quite properly, use this tool to ask Google to remove unwanted links from search results and, in the UK, can appeal to the Information Commissioner’s Office (ICO) or the courts if Google refuses to cooperate. The words ‘some results may have been removed under data protection law in Europe’ now appear on almost all such searches.
How does our experience of Google help us when generative AI searches are the future? AI models are trained on a vast range of text from the internet and other publicly accessible sources. They do not store this material as ‘information’ in the way a traditional database does; instead, they encode complex patterns learned during training. Once trained, the model retains no record of where it learned any given piece of information. This means that it is not usually possible to locate and remove a specific fact or detail (for example, an incorrect fact about a high-profile, public individual). However, AI providers, such as Google, process personal data, and clients can use the GDPR to seek to regulate how their personal data is handled. Indeed, if you ask ChatGPT about enforcing your data rights, it helpfully points you to a process similar to the Google Spain removal system.
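To see why erasure is straightforward for a database but ill-defined for a trained model, consider the following toy Python sketch. It is purely illustrative, not a depiction of how any real AI product stores data, and every name and ‘fact’ in it is invented for the example.

    from collections import defaultdict

    # Database-style storage: each fact is a discrete, deletable record.
    database = {"person_x": "an unwanted fact about Person X"}
    del database["person_x"]  # erasure is trivial: find the record, remove it

    # Model-style storage: a toy bigram model 'trained' on two documents.
    documents = [
        "person x lives in london",
        "person x works in finance",
    ]
    bigram_counts = defaultdict(int)
    for doc in documents:
        words = doc.split()
        for a, b in zip(words, words[1:]):
            bigram_counts[(a, b)] += 1  # counts blend across all sources

    # After training, nothing records which document contributed which
    # count, so 'deleting document 1' has no well-defined target.
    print(bigram_counts[("person", "x")])  # 2 -- an aggregate, not a source

The point of the sketch is that a database keeps one deletable record per fact, whereas a trained model keeps only blended statistics; a ‘right to be forgotten’ request therefore has no single record to target.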
While asserting your data rights is unlikely to guarantee the complete removal of specific types of content in all instances, ChatGPT will need to ensure that it reduces the likelihood of problematic content surfacing in future responses. Furthermore, by managing your online profile through traditional means, including Google removals, you are helping to improve generative AI outcomes, which rely on online sources for their responses. Absent any online source, generative AI responses can contain dangerous ‘hallucinations’: fabricated text generated simply to provide a response.
What about the data rights of the users of AI services, rather than the subjects of searches? ChatGPT, like most online services, publishes extensive information on how data is used. OpenAI asserts that it does not use people’s data to sell its services, to advertise or to build profiles of them, and it provides settings to manage data retention. However, data is used to ‘train’ the AI algorithms to make the product more ‘helpful’. How this works in reality remains to be seen. Little is known about the workings of OpenAI, the organisation developing ChatGPT. And its promises, and your rights, are potentially academic in the face of the challenges of enforcement. The ICO may yet find its teeth, but it is not currently a feared adversary for the US tech giants. Likewise, following the Supreme Court judgment in Lloyd v Google (UKSC 2019/0213), the prospect of a group claim in the UK courts that could hold them to account is now extremely remote.
Perhaps we will see a resurgence of defamation claims if generative AI replaces Google searches. By producing original text in its responses, rather than simply providing a list of links to third-party websites, ChatGPT and others will struggle to avoid responsibility as ‘publishers’ of defamatory content. When setting up your ChatGPT account you are told to ‘check your facts’ and warned that it may not ‘give accurate information’, but these warnings are unlikely to absolve it of legal responsibility for any defamatory information it publishes.
Proving that ‘serious harm’ has been caused by a publication, as required by section 1 of the Defamation Act 2013, will be a challenge for claimants, as will the US SPEECH Act (which bars enforcement of foreign libel awards in the US). However, we may not be waiting long for the first UK defamation claim against a generative AI product.
Changes in media and technology, and the uncertain consequences of those changes, are often followed by high-stakes litigation before some form of ground rules emerges. A change of this scale makes disputes inevitable. Watch this space.
Dominic Crossley is a partner and head of dispute resolution at Payne Hicks Beach, London