NEWS 19 November 2015
UK — Ipsos MORI and Demos have released a report urging an improvement in the ethical standards of social media research, amid public concern about how researchers are using social media data.
The report, #SocialEthics, found low public awareness that information posted on social media can be mined for research, compared with other uses of social media data such as targeting advertising. While just 38% of the public are aware that their social media posts are potentially being analysed for research projects, 57% are aware the data can be used for ad targeting and 54% that it can be used to personalise the content they see on that network.
Last year the Samaritans pulled its Radar app, which was designed to detect when people on Twitter appeared to be suicidal by scanning accounts for phrases such as ‘tired of being alone’, ‘depressed’ and ‘need someone to talk to’, after criticism that, among other things, it had not sufficiently taken people’s privacy into account.
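For context, the kind of monitoring Radar performed amounts to simple phrase matching against the recent posts of accounts a user follows. Below is a minimal, illustrative sketch of that general technique in Python; the phrase list is taken from the examples quoted above, but the function names and structure are assumptions for illustration, not the actual Radar implementation.

```python
# Illustrative only: naive phrase matching over recent posts from a followed account.
# The watch-list echoes the phrases quoted in the article; everything else here
# (names, structure) is an assumption, not the real Radar code.

WATCH_PHRASES = [
    "tired of being alone",
    "depressed",
    "need someone to talk to",
]

def flag_post(text: str) -> bool:
    """Return True if the post contains any watched phrase (case-insensitive)."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in WATCH_PHRASES)

# Example usage over a handful of posts
posts = [
    "Great day at the beach!",
    "Honestly just tired of being alone lately",
]
print([p for p in posts if flag_post(p)])
# -> ['Honestly just tired of being alone lately']
```

As the article notes, the criticism was less about the matching step itself than about surfacing people’s posts in this way without sufficient regard for their privacy.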
#SocialEthics is the culmination of a year-long exploration of social media research ethics and concludes with a series of recommendations to researchers, regulators and social media organisations on how they can raise awareness and improve ethical standards in this field.
Despite the fact that the terms and conditions of the major social networks mean data can currently be used by third parties for any number of purposes, including analysis for research, 60% said they didn’t think social media companies should be sharing their data with third parties for research purposes.
Public expectations about social media data privacy aren’t being met by current practice.
The report found that a common sentiment was that people felt they had ‘lost control’ of how their data was being used; 74% would prefer to remain anonymous if a social media post of theirs was used in a research report; 54% agreed that all social media accounts have the right to anonymity in social media research; and almost a third (32%) still thought that social media companies should not disclose high-level data, such as the volume of posts on a particular subject, even if this information is not attributed to individuals.
The key factors influencing whether people thought their social media data should be used for research projects included: whether the data was already publicly available; how much anonymity was provided; and who commissioned the research.
The report makes a series of recommendations on how research organisations and social media platforms can better safeguard social media users.
The report is part of the Wisdom of the Crowd project, involving Ipsos MORI, Demos, the University of Sussex and CASM Consulting LLP, which looked at the feasibility of large-scale aggregated research using social media data. Results are based on an online quota survey of adults aged 16 to 75. The survey consisted of 1,250 interviews conducted between 7 and 13 August 2015.
2 Comments
Annie Pettit, CRO Peanut Labs
9 years ago
There are lots of readily available resources for people who value the ethics of social media research. You can download guidelines from various organizations:
ESOMAR: https://www.esomar.org/knowledge-and-standards/codes-and-guidelines.php
CASRO: http://c.ymcdn.com/sites/www.casro.org/resource/resmgr/docs/social_media_research_guidel.pdf
MRA: http://www.mra-net.org/rq/documents/MRA_IMRO_SMR16.pdf
Also, ESOMAR/GRBN are, as we speak, updating their guidelines. Lastly, ISO 19731 (Digital analytics and web analyses in market, opinion, and social research) is being actively worked on by a global team of researchers and practitioners. That document focuses more on standards of quality than on ethics, although high-quality research is indeed more likely to be ethical research.
NickD
9 years ago
Funny, isn't it: Twitter is the main social network that's actually "minable", and its users are consciously broadcasting opinions and thoughts to the wider world, but they don't like it when the wider world actually looks at it and thinks about it in the context of other similar opinions from other users. More seriously though, the issue of consent in something like Twitter analysis is a significant headache - what's realistic to ask for, and who takes the responsibility?