Researchers used Reddit to conduct undisclosed AI-based study
Researchers from the University of Zurich deployed AI-generated comments on Reddit’s ‘Change My View’ subreddit (r/ChangeMyView) to study how AI could be used to change people’s views.
The AI-generated comments posted for the experiment included one that purported to be written by a victim of rape and another that appeared to come from a trauma counsellor specialising in abuse.
The researchers did not disclose that they were conducting the research project until months later when they contacted the subreddit’s moderation team. The moderators then announced the disclosure to the community towards the end of April.
In the announcement, the moderation team said it had received contact from the University of Zurich as "part of a disclosure step in the study".
The researchers used multiple accounts to post on the subreddit, with the experiment aiming to assess the persuasiveness of large language models (LLMs) in an ethical scenario.
In the disclosure to the moderation team, the researchers wrote: "In commenting, we did not disclose that an AI was used to write comments, as this would have rendered the study unfeasible. While we did not write any comments ourselves, we manually reviewed each comment posted to ensure they were not harmful."
The experiment broke the rules of the subreddit, which prohibits AI-generated comments. In the disclosure, the researchers wrote: "We recognize that our experiment broke the community rules against AI-generated comments and apologize. We believe, however, that given the high societal importance of this topic, it was crucial to conduct a study of this kind, even if it meant disobeying the rules."
In response, the subreddit’s moderator team wrote to the University of Zurich to file an ethics complaint, citing multiple concerns about the impact on the community and what it called “serious gaps” in the ethics review process.
The researchers later apologised to users of the r/ChangeMyView community, writing in a post: "We did not intend to cause distress to the community and offer our full and deeply felt apology. The study was carried out in good faith, to better understand the persuasive potential of language models. However, the reactions of the community of disappointment and frustration have made us regret the discomfort that the study may have caused."
They also said they would not publish the results of the research and had permanently ended the use of the dataset generated from the experiment.
Reddit said it had no advance knowledge of the project, has removed all accounts and AI-generated content associated with the research, and is considering legal action, according to a post by the company’s chief legal officer, Ben Lee.
In the post, added to the subreddit, Lee wrote: “What this University of Zurich team did is deeply wrong on both a moral and legal level. It violates academic research and human rights norms, and is prohibited by Reddit’s user agreement and rules, in addition to the subreddit rules.”
Lee added: “We are in the process of reaching out to the University of Zurich and this particular research team with formal legal demands.”
A Reddit spokesperson told Research Live that the platform was able to identify most of the researchers’ accounts before the experiment was disclosed.
The company uses internal safety teams to enforce its rules, work that includes detecting inauthentic accounts. As a result of the incident, the spokesperson said, Reddit had refined its automated tooling to improve how it detects similar issues in future.
Kurt Bodenmüller, a spokesperson from the university’s media relations team, told Research Live in an emailed statement that university authorities are aware of the incidents and will investigate them.
The university said the researchers behind the project had decided of their "own accord" not to publish the results of the study. It would not disclose the identity of the researchers or the department where the research originated, citing privacy reasons.
Bodenmüller wrote: "In light of these events, the ethics committee of the Faculty of Arts and Social Sciences intends to adopt a stricter review process in the future and, in particular, to coordinate with the communities on the platforms prior to experimental studies."
The university said a request was submitted to the ethics committee of the Faculty of Arts and Social Sciences in April 2024 to review a research project investigating the potential of AI to reduce polarisation in "value-based political discourse". The project involved four studies, one of which involved the use of LLM-driven conversational agents in online forums and subreddits.
The ethics committee advised the researchers that the study was considered to be "exceptionally challenging" and that the chosen approach should be "better justified", the "participants should be informed as much as possible" and the research should fully comply with the platform’s rules. However, such recommendations are not legally binding, wrote Bodenmüller, and researchers themselves are responsible for carrying out projects.

Research Live is published by MRS.