On 7 January 2025 Meta made sweeping changes to the hateful conduct policy within its community standards (the standards). This article examines how these changes put marginalised groups at serious risk and how, in the context of the Online Safety Act 2023 (the Act), they place Meta in breach of its duties to prevent and protect users from harm.

Suneet Sharma

In particular, the changes permit users to describe LGBTQ+ people as mentally ill, to refer to transgender people as 'it' and to refer to women as 'property' in user-to-user communications on Meta's platforms, such as Facebook and Instagram.

Relevant provisions of the Online Safety Act 2023

So how do these changes sit within the framework of the Act?

The Act came into force on 26 October 2023, and many of its provisions are still being implemented in phases. As user-to-user services, both Facebook and Instagram come under the purview of the Act.

Section 7 of the Act places a duty of care on user-to-user service providers such as Meta. In particular, s.7(2) provides that Meta must comply with the duties regarding illegal content set out in s.10(2) to (8) of the Act, as well as the duties about complaints procedures set out in s.21.


Application of the provisions of the Online Safety Act 2023

Where harassment meets a criminal threshold, as it may where users make the hateful statements the standards now permit, Meta, as the owner of Facebook and Instagram, has a duty to prevent individuals from encountering such content and to mitigate and manage the risk of its platforms being used for the commission of priority offences.

Indeed, the sentencing guidelines for such offences note that where an offence is committed by demonstrating hostility based on presumed characteristics of the victim, including sex, sexual orientation or transgender identity, this is a factor indicating high culpability, potentially aggravating the sentence.

Yet here is Meta, facilitating the commission of such offences by making explicit provision that these statements are allowed on its platforms. Adding insult to what may result in actual injury, it attempts to justify this 'given political and religious discourse' in an LGBTQ+ context.

Being homosexual was declassified as a mental disorder by the World Health Organisation (WHO) in 1990, and in 2019 the WHO reclassified transgender people's gender identity as gender incongruence, moving it from the mental health and behavioural disorders chapter to the chapter on conditions related to sexual health.

Yet Meta still thinks it acceptable to equate being LGBTQ+ with mental illness?

Section 10(2) is notably limited to the taking or use of 'proportionate measures' - and in the case of Facebook and Instagram, these are clearly among the most sophisticated and wide-ranging user-to-user services in existence. It is therefore readily arguable that policies which entrench the protection of users at the outset, prevent such content from appearing on the platforms and allow complaints from users subjected to such comments to be upheld rather than dismissed must be in place, failing which the service provider must face the consequences of breaching the Act.

Indeed, my hope is that, since the policies apply worldwide, online safety laws will intervene against such pernicious changes, which further marginalise those at risk and expose them to abuse at the whim of political pandering.

Non-compliance with any regulatory action from Ofcom could rightly have serious implications for companies such as Meta - under the Act, companies can be fined up to £18 million or 10% of their qualifying worldwide revenue, whichever is greater.
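As a minimal illustrative sketch of how that cap operates (the revenue figure below is hypothetical, and 'qualifying worldwide revenue' is a term with a specific statutory definition):

```python
def max_osa_penalty(qualifying_worldwide_revenue_gbp: float) -> float:
    """Illustrative only: the Act's cap is the greater of £18 million
    or 10% of qualifying worldwide revenue (as defined in the Act)."""
    return max(18_000_000, 0.10 * qualifying_worldwide_revenue_gbp)

# Hypothetical example: a provider with £100bn qualifying worldwide
# revenue would face a cap of £10bn, far above the £18m floor.
print(f"£{max_osa_penalty(100_000_000_000):,.0f}")  # £10,000,000,000
```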

In the UK, Ofcom, which regulates this space, has said: 'from 17 March 2025, providers will need to take the safety measures set out in the Codes of Practice or use other effective measures to protect users from illegal content and activity.'

Even though Meta is not based in the UK, the government's Online Safety Act explainer makes clear, as do the provisions of the Act itself, that its duties are enforceable against services with a significant number of UK users.

Draft Codes of Practice

Also of relevance here are the illegal content Codes of Practice for user-to-user services, which constitute the recommended guidance for service providers to adopt. They set out that larger service providers' policies should be drafted in such a way that illegal content is not permitted. Clearly, Meta has failed to do so.

 

Read Meta's policy on hateful conduct here

 
