At the beginning of March, the Information Commissioner’s Office (ICO) announced an investigation into a number of social platforms, focused on the ways in which they collect and process the personal data of young people, and the measures they take to promote children’s safety. This followed an investigation last year, which canvassed a number of the biggest social platforms and those most heavily used by young people, assessing aspects such as the use of age assurance techniques when signing users up to the sites, and personalised advertising based on behavioural or profile data once users were enrolled.
Imgur and Reddit were two of only three social platforms found in 2024 not to be using any age verification techniques (including self-declaration) when users signed up with them. The third high-profile target of the investigation was TikTok, which has already been the subject of concerted litigation in the UK over its use of children’s data, including in relation to the delivery of algorithmically targeted content.
The ICO noted that it was co-ordinating closely with Ofcom, given that regulator’s overlapping jurisdiction in this space under the Online Safety Act. Both regulators have made it clear that resources will be allocated, and enforcement targets selected, on the basis of the risk posed to users by certain types of platforms or categories of activity. As such, the protection of vulnerable young users is an obvious early priority.
With the coming into force of the Online Safety Act, age verification or assurance have been hot topics in the world of platform compliance of late. Reddit and Imgur were outliers in a space where social platforms have increasingly been working to, at the very least, put in place some form of robust self-certification of age when users sign up. A number of platforms are going further, using AI or facial recognition tools to identify those who appear not to be the age that they claim, and excluding or expelling those who do not match the intended age profile of their users.
TikTok has comparatively sophisticated age verification measures in place. The ICO’s focus there is on the way that young people’s personal data is used once they have signed up and started using the platform, and in particular on recommender systems. These algorithms use a range of behavioural tracking and analysis tools to establish what content on the platform is of particular interest to users, and then to show them more and more of the content that they are judged most likely to engage with. At its best, the effect of this is to quite rapidly create a seemingly curated stream of personalised content, covering the themes and topics of most interest to the user, and occasionally presenting fresh material based on the preferences of other similar users.
The danger with such recommender systems, though, is that a metric based on engagement will not necessarily be capable of distinguishing between positive and negative, or informative and harmful, content. One of the concerns that has prompted litigation around such systems in the past is that an individual with an interest in topics such as self-harm, or who is vulnerable to radicalisation, for example, might start looking for content to feed their curiosity and then rapidly come to be deluged in increasingly harmful content as the recommender algorithm tracks their interest in and engagement with such material.
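That feedback dynamic can be sketched in a few lines of code. Everything below is purely illustrative, a toy model with invented topic names and a deliberately crude ranking rule; real platforms use far more sophisticated machine-learning systems, but the core point survives simplification: the ranking signal is topic-blind, so it amplifies whatever the user engages with, harmful or not.

```python
# Illustrative sketch of an engagement-driven recommender loop.
# Topic names, items, and the ranking rule are all hypothetical.
from collections import Counter

CATALOGUE = {
    "cooking": ["recipe_1", "recipe_2", "recipe_3"],
    "sport":   ["match_1", "match_2", "match_3"],
    "harmful": ["item_1", "item_2", "item_3"],
}

def recommend(engagement: Counter, n: int = 3) -> list:
    """Rank topics purely by past engagement counts.

    The metric cannot tell a harmful topic from a benign one:
    whichever topic the user has engaged with most simply
    dominates the next feed.
    """
    ranked = [topic for topic, _ in engagement.most_common()]
    ranked += [t for t in CATALOGUE if t not in ranked]  # cold-start fill
    feed = []
    for topic in ranked:
        feed.extend(CATALOGUE[topic])
        if len(feed) >= n:
            break
    return feed[:n]

# A single engagement with harmful content...
engagement = Counter()
engagement["harmful"] += 1
feed = recommend(engagement)
# ...and the next feed is already dominated by that topic,
# because engagement is the only signal the ranker sees.
```

In this toy model, one click is enough to tip the whole feed, which is the essence of the "deluge" concern: the system is optimising engagement, not welfare, and nothing in the metric pushes back.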
While some of the responsibility for policing such systems naturally falls within the purview of Ofcom under the Online Safety Act, the automated nature of algorithmic processing, and the fact that the recommender systems feed content based on individual online behaviours and preferences, means that data protection law is also engaged, hence the ICO’s investigation. The fact that these specific organisations have been targeted is not necessarily an indication of any breaches of the Children’s Code, or indeed of the UK GDPR. The ICO is as much hoping to learn about the challenges confronting platform providers in this space, as it is looking for a target to inflict sanctions on.
The protection of young people and their data online is a real priority for the ICO, and its investigations are unlikely to stop with these first few high-profile targets. For those whose online services are targeted at young people, or indeed where those services are likely to be particularly enticing to children even if not intended for them, this should be an urgent wake-up call. The online safety of young people is likely to remain a priority for regulators for quite some time. With high-profile investigations like this showcasing the ICO’s priorities, there will be no excuse for those platform operators, big or small, who do not take their responsibilities seriously.
Will Richmond-Coggan is a partner at Freeths LLP, and the head of the firm’s contentious data protection team