NEWS | 19 February 2024

Data regulator tells platforms to consider information rights when moderating content


UK – The Information Commissioner’s Office has issued its first guidance on content moderation for platforms as part of an ongoing partnership with Ofcom.


The new guidance outlines how data protection law applies to online content moderation processes, which organisations – including social media platforms – use to analyse content generated by users.

Organisations use people’s personal information to moderate content, and the process can cause harm if incorrect decisions are made, the regulator said – for example, if a platform incorrectly identifies a user’s content as illegal.

The guidance is aimed at organisations that are carrying out content moderation to comply with online safety law under the Online Safety Act 2023. It also applies to those moderating content for other reasons.

Stephen Almond, executive director for regulatory risk at the ICO, said: “Content moderation decisions shape what we see and who we interact with online. It’s crucial that data protection is designed into these processes so that people have confidence in how their information is being used and can get redress if the wrong decisions are reached.” 

Gill Whitehead, Ofcom group director for online safety, said: “Effective content moderation will play a crucial role in creating a safer life online for people in the UK. Last year, Ofcom proposed how tech firms can protect their users from illegal content, and we’re working closely with the ICO to make sure companies protect people’s personal data in the process.” 

The guidance is part of the ICO’s ongoing work with Ofcom on data protection and online safety technologies. The ICO will update the guidance to reflect technological developments and Ofcom’s finalised online safety codes of practice.