NEWS 26 October 2021
UK – Facebook’s algorithms are making hate speech and violence on the platform worse and regulation is needed to address the issue, Facebook whistleblower Frances Haugen has told MPs.
Appearing in a joint committee discussing the draft online safety bill, Haugen said that engagement-based rankings on Facebook were prioritising hate speech and groups on the platform were allowing extreme content to be easily disseminated.
Engagement-based ranking is used in social media to determine which content the platform believes is most relevant to its users and promote that content on people’s newsfeeds.
However, Haugen said that a side-effect of this system was that popular content that promoted extremist views or hate speech was being promoted, and that artificial intelligence (AI) systems designed to block this content were only stopping a fraction from being published.
“The algorithms take people who have very mainstream interests, and they push them towards extreme interests – you can be someone centre-left, and you get pushed to extreme left; you can be centre-right, and you get pushed to radical right; you can be looking for healthy recipes, and you’ll get pushed towards anorexia content,” Haugen said.
“When that context is hate, you see a normalisation of hate and a normalisation of dehumanising others, and that’s what leads to violent incidents.”
Haugen added: “Facebook never set out to prioritise polarised and divisive content, it just happened to be a side-effect of choices they did make.
“Part of why I came forward is that I am extremely worried about the condition of our societies, the interaction of the choices Facebook has made and how it plays out more broadly.”
There was a necessity for greater regulation of Facebook, Haugen added, but she warned the company was trying to avoid increased oversight.
“When we see something like an oil spill, that oil spill doesn’t make it harder for society to regulate oil companies,” Haugen said. “But right now, the failures of Facebook are making it harder to regulate Facebook.”
She added: “Right now Facebook is closing the door on us being able to act. We have a slight window of time to regain people’s control over AI – we have to take advantage of this moment.”
Haugen also said that she felt Instagram is “more dangerous than other forms of social media” for children and young people because “it is about social comparison and about bodies”.
“It is about people’s lifestyles, and that’s what ends up being worse for kids,” she said.
“I am deeply worried that it may not be possible to make Instagram safe for a 14-year-old, and I sincerely doubt it is possible to make it safe for a 10-year-old.”
Haugen’s comments follow reports that Facebook’s own internal research found that Instagram was harmful to teenagers.
Last month, the company paused development of a version of Instagram aimed at children following media coverage of the internal research and subsequent political pressure.
Non-English languages were particularly at risk from extremist content, Haugen said, with many languages getting a “fraction” of the safeguards in place for US English, and she said that recent upheaval in countries such as Ethiopia showed the potential for social media to fuel societal conflict.
Facebook has been approached for comment.