NEWS
9 August 2021

Lack of moderators affecting anti-misinformation efforts


UK – A reduction in the number of moderators at the onset of Covid-19, combined with an increased reliance on algorithms, made it harder to moderate social media effectively, according to a Centre for Data Ethics and Innovation (CDEI) report.

The report, called "The role of AI in addressing misinformation on social media platforms", found that Covid-19 led to a reduction in the moderation workforce and greater use of automated content decisions without significant human oversight, even as misinformation increased.

The increased reliance on algorithms saw substantially more content being incorrectly identified as misinformation, according to the report.

The findings are based on an expert forum held by the CDEI last year with 14 stakeholders, including social media companies, fact-checking organisations, media groups and academics.

Among the issues the forum sought to understand were the role of algorithms in addressing misinformation, changes that occurred during the pandemic, the effectiveness of social media platforms’ approach to the issue and the extent to which greater transparency is required.

The report said that social media platforms have issued reassurances that the increased reliance on algorithms is only temporary, and that human moderators will remain at the core of their moderation processes.

A lack of evidence is also hindering understanding of how effective measures to combat misinformation on social media are, and the report says clear guidance from government on the types of information companies should disclose is needed.

Platforms are disclosing more information about misinformation, but the report suggests they could go further in areas including content policies, content moderation processes, the role of algorithms in moderation and design choices, and the impact of content decisions.