OPINION | 2 July 2014

Is Facebook the only emotional manipulator?


The ethics of Facebook’s news feed research have been questioned, but is the market research industry as a whole guilty of playing with respondents’ emotions, asks Annie Pettit


While most people’s feeds are curated based on which friends they like, share, and comment on most often, the feeds of the study’s participants were additionally curated by considering the positive and negative words their friends’ posts contained. In both conditions, Facebook chose which of your friends’ posts you would see, though in the test condition you might be offered a greater proportion of their positive or negative posts. The conclusion was that you can indeed affect people’s emotions based on what they read. You can read the published study here.

I honestly don’t know where I stand on the ethics of this study right now. Ethics interest me but I’m not an ethicist. So instead, let me think about this from a scientific point of view.

Do you deliberately manipulate emotions in the work you do? As a marketing researcher, your job is ONLY to manipulate emotions. You know very well that this brand of cola or that brand of chips or the other brand of clothing cannot boast better taste, feel, look, or workmanship. All of those features are in the eye, or taste buds, of the beholder. Through careful research, we seek to learn what makes different kinds of people happy about certain products so that marketers can tout the benefits of their products. But, at the same time, we also seek to learn what disappoints and makes people unhappy about the products and services they use such that those weaknesses can be exploited by marketers.

Through a strange twist of fate, a colleague and I recently conducted a tiny study. We found the results quite interesting, wrote a quick blog post about it, and scheduled it to post today. Like Facebook, though on a much smaller scale, I will confess that I manipulated the emotions of about 300 people.

I noticed across a number of studies that age breaks are inconsistent. Sometimes researchers create an 18 to 34 age break, and other times they create an 18 to 35 age break. In other words, sometimes you’re the youngest person in a group, and sometimes you’re the oldest person in a group. Would you rather be the oldest person in a young group, or the youngest person in an old group? What did we find? Well, people did indeed express greater happiness when they were part of the younger group, even though they were the oldest person in that group. I deliberately and knowingly manipulated happiness. Just like Facebook did. Do you hate me now? Do you think I’m unethical? You can read the post here.

As marketing researchers, every bit of research we do, every interaction we have with people, is intended to manipulate emotions. We collect data that marketers use to criticise our favourite products. We collect data so that politicians can directly criticise other politicians through their negative ad campaigns. Has that bothered you yet? Has that bothered you enough to warrant outcries in social media? Have you campaigned for an immediate ban of television, radio, and viewing products on the shelves at supermarkets knowing that those things are intended to manipulate our emotions?

Since you know that your research is intended to affect emotions, do you inform your research participants about the potential negative consequences of participating in your research? Do you tell them that seeing their age in the older age bracket may make them unhappy, that viewing critical ads may make them unhappy, that being asked to select up to five negative attributes might make them unhappy?

Given that we’ve done it this way for so long, have we become complacent about the ethics of the research we conduct? In this age of big data, is it time to take a fresh look at the ethics of marketing research?

Annie Pettit is the chief research officer at Peanut Labs and vice-president, research standards at Research Now. She tweets at @LoveStats and is the author of The Listen Lady, a novel about social media research.


8 years ago

The post I mentioned is now live: http://www.research-live.com/opinion/is-facebook-the-only-emotional-manipulator?/4011914.article


8 years ago

I agree with what you say, Annie, from a research perspective... which brings us back to the fact that almost three quarters of a million people were taking part in research without knowing it, and that one of the 'backers' was the US military...


8 years ago

I agree that a lot of research explores how to create a desired emotional response to a product, service or advertisement. However, isn't the difference here that, unlike in 'traditional' research, Facebook can directly measure, target and influence individual members' emotions?


8 years ago

What keeps stumping me is that our own friends wrote all the positive and negative messages and intended for us to read them and sympathise. Our friends intended us to read "I hate my landlord" and hoped that we would reply with "You should hate your landlord." Facebook did not create or edit what our friends wrote. What if the study had been prefaced as "In order to determine whether we could help people feel better, we ensured that their negative posts were more likely to be seen such that they could receive moral support from their friends"? Does that make the study sound better?

In a similar vein, I got a new TV recently. It's got a ridiculous amount of behind-the-scenes programming that I don't understand, but it knows which shows are played more or less often on my TV. If I watch a lot of scary, depressing shows, it offers me even more scary, depressing shows. Ads have already gone that route. Based on what I do online, I'm shown ads exactly tuned in to me. And now with all the wearables, just imagine the ads we'll see soon - blood pressure meds, exercise programs - precisely designed for me. --Still puzzled, still playing Devil's advocate
