FEATURE | 31 July 2017

It’s the way you ask it



Cognitive biases can be triggered by questionnaire structure – by mistake or by design, as Crawford Hollingworth explores


A frequent objective for any organisation is measuring customer satisfaction. Without this information, it’s hard to know how, where and what to improve in a product or service. Yet it’s not always easy to gather customer feedback accurately.

Behavioural science can offer explanations for why a survey is often not as accurate or informative as we might hope. Selection bias, dishonesty and subconscious cognitive biases – such as framing and priming effects, and reference points – can all skew responses.

Behavioural science can also offer some useful guidelines for better measurement of customer satisfaction. In this article, we’ll look at four insights into the design of research and survey questions to make them more effective.

Questions that collect objective, behavioural information can increase accuracy

Survey questions are often open-ended and unstructured in their language to avoid priming a particular response from a customer. However, they leave respondents with considerable ‘wiggle room’ in how they answer, and can lead to skewed responses. 

Why? Because research into dishonesty has found that the majority of us cheat and lie a little bit. Asking general, broad questions often gives us leeway to be creative in our answers, or rationalise to ourselves why it’s OK to give the answer we do. 

Asking semi-structured questions about specific issues can make it harder for people to omit or gloss over problems or poor performance and has the potential to give more objective or specific information about outcomes or behaviours.

Similarly, if people need to select from a range of answers, it can help to make those more specific to reduce wiggle room. Dan Ariely, professor of behavioural economics at Duke University, North Carolina, US, says: “Take the question, ‘How often have you done X?’ If you create a scale from ‘very rarely’ to ‘very frequently’, that’s not as useful as offering, ‘2 times last week’ or ‘3 times last week’. It is important to use categories that people can accurately quantify. The more concrete your response scale is, the more likely people are to answer your questions in an informative way – and answer accurately and honestly.”
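To make this concrete, here is a minimal sketch of what concrete response categories might look like in a simple survey script. This is our illustration, not from the article; the question text and category labels are hypothetical:

```python
# Hypothetical sketch: replacing a vague frequency scale with
# concrete, countable categories, in the spirit of Ariely's suggestion.

VAGUE_SCALE = ["very rarely", "rarely", "sometimes", "frequently", "very frequently"]

# Concrete categories respondents can actually count and verify.
CONCRETE_SCALE = [
    "0 times last week",
    "1-2 times last week",
    "3-5 times last week",
    "6+ times last week",
]

def ask(question: str, options: list[str]) -> str:
    """Present a closed question and return the chosen option."""
    print(question)
    for i, opt in enumerate(options, start=1):
        print(f"  {i}. {opt}")
    choice = int(input("Enter option number: "))
    return options[choice - 1]

if __name__ == "__main__":
    answer = ask("How often did you use the service?", CONCRETE_SCALE)
    print(f"Recorded: {answer}")
```

The point of the design is simply that each category is something a respondent can verify against memory, leaving less room to rationalise a flattering answer.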

How we frame a question can influence our response

We can ask a question in many different ways – in the language we use, in how we structure it and, in particular, in what we make salient – and each may elicit a different response. Behavioural science has shown that our decision-making can be influenced by how information is framed or presented to us. A simple example is how we judge the quality of meat labelled ‘90% lean’ versus ‘10% fat’. Most of us will be drawn to the ‘90% lean’ frame, conscious of healthy-eating advice.

This insight can extend to how questions are framed, too. What element does the question most bring to mind? To what extent does the way a question is worded suggest a status quo or a situation that implies most others are happy with it?

In the run-up to the 2008 US presidential election, which pitted Barack Obama against John McCain, the firm behind the NBC News/Wall Street Journal poll included questions that demonstrate how powerfully framing effects can alter our perceptions and choices.

When people were asked who ‘would be the riskier choice for president – John McCain or Barack Obama’, the results were 35% for McCain and 55% for Obama. 

Yet when asked who ‘would be the safer choice for president’, the results were 46% for McCain and 41% for Obama. If the two framings were equivalent, the two sets of results should be mirror opposites – 55% should have called McCain safer and 35% Obama – but a clear framing effect skews them: framed as the ‘safer’ choice, Obama fared better, and McCain worse, than the mirror would predict.
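To make the ‘mirror opposites’ point concrete, here is the arithmetic as a short check in Python. The percentages are those reported above; the variable names and layout are ours:

```python
# If the 'riskier' and 'safer' framings were equivalent, the share
# calling a candidate safer should mirror the share calling the
# OTHER candidate riskier. The gap between them is the framing effect.

riskier = {"McCain": 35, "Obama": 55}   # % in the 'riskier' framing
safer = {"McCain": 46, "Obama": 41}     # % in the 'safer' framing

for name, other in (("McCain", "Obama"), ("Obama", "McCain")):
    mirror = riskier[other]             # mirror prediction for 'safer'
    gap = safer[name] - mirror
    print(f"{name}: safer {safer[name]}%, mirror prediction {mirror}%, "
          f"framing gap {gap:+d} points")
```

Running this shows McCain nine points below his mirror prediction and Obama six points above his – the size of the framing effect in each direction.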

Other issues of question framing also arose in the referendums for Scottish independence and the UK’s alternative vote. 

Spending time reflecting on how best to frame the question to which you need an answer is essential for gathering unbiased responses.

Continuous rating scales can be more sensitive than discrete scales

Customer feedback and ratings need categorising, so we often ask customers to rate a service using a discrete scale – known as a Likert scale – ranging, perhaps, from ‘very poor’ through ‘poor’, ‘satisfactory’ and ‘good’ to ‘excellent’.

Behavioural science suggests that discrete rating scales like this can influence the accuracy of people’s responses and lead them to avoid the extremes of the scale, so that most responses cluster in the middle area. Again, Ariely explains: “Often people use a five-point response scale, but we find most people have an aversion to the extremes. This means that when we use a five-point scale, we are effectively using a three-point scale. A continuous scale with just two anchors in the extremes in such cases is ideal.”

It can often make more sense to use a continuous scale (a visual analogue scale) with no defined ‘points’, because it offers a greater range and sensitivity of response.
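To see why this matters, here is a small, purely hypothetical simulation – not the Zurich study described below – that assumes a latent satisfaction score and a simple ‘avoid the endpoints’ response rule, and shows how extreme-aversion effectively compresses a five-point scale:

```python
import random

def latent_satisfaction() -> float:
    """Hypothetical true satisfaction on a 0-1 scale."""
    return random.betavariate(4, 2)  # skewed positive, like happiness data

def likert_with_extreme_aversion(x: float) -> int:
    """Map to a 1-5 scale, but pull endpoint answers toward the middle."""
    point = round(1 + 4 * x)
    if point == 1:
        point = 2   # assumed aversion to the lowest category
    if point == 5 and random.random() < 0.7:
        point = 4   # assumed aversion to the top category
    return point

def vas(x: float) -> float:
    """Continuous 0-100 slider: no categories to avoid."""
    return round(100 * x, 1)

random.seed(42)
sample = [latent_satisfaction() for _ in range(1000)]
likert_answers = [likert_with_extreme_aversion(x) for x in sample]
print("Distinct Likert values actually used:", sorted(set(likert_answers)))
print("Example VAS answers:", [vas(x) for x in sample[:5]])
```

Under these assumed response rules, the five-point scale collapses toward three or four usable categories, while the slider preserves the full spread of the underlying scores.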

A study measuring people’s happiness, conducted by Raphael Studer and Rainer Winkelmann, economists at the University of Zurich, used a randomised trial to tease out how responses differ between the two scales. More than 5,000 Dutch households participated in the surveys.

Half of respondents were asked to report their happiness on a Likert scale and then – a month later – on a visual analogue scale (VAS); the other half were given the VAS first and the Likert scale a month afterwards.

Although responses were broadly similar across the two types, there were notable differences, too. Responses to the Likert scale tended to cluster around seven and eight (out of 10) – positive, but avoiding extremes. In comparison, responses to the VAS were more varied and more spread out; although the usual findings emerged – the middle-aged were less happy, the married happier – the size of each effect appeared greater on the VAS.

One question may prime the answer to a later question

We know the information we need to gather, but what order should we put it in? Does question order affect responses, and, if so, how?

Researchers have found that responses to a questionnaire may be biased by what is most available in people’s memory of the interaction, rather than based on balanced recall. Whatever is ‘front of mind’ when answering a question may bias people’s response. 
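One practical mitigation is to randomise or counterbalance question order across respondents, so that order effects average out and can be measured. A minimal sketch, with hypothetical question blocks:

```python
import random

# Hypothetical question blocks whose order might prime later answers.
BLOCKS = {
    "politics": ["How satisfied are you with the current government?"],
    "wellbeing": ["Overall, how satisfied are you with your life?"],
    "service": ["How satisfied are you with our service?"],
}

def ordered_blocks_for(respondent_id: int) -> list[str]:
    """Give each respondent an independent random block order,
    seeded by their ID so the assignment is reproducible."""
    rng = random.Random(respondent_id)
    order = list(BLOCKS)
    rng.shuffle(order)
    return order

# Recording each respondent's order alongside their answers lets you
# test for order effects in the analysis, rather than discover them later.
for rid in range(3):
    print(rid, ordered_blocks_for(rid))
```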

Let’s look at research conducted into the impact of the conflict-resolution organisation Seeds of Peace, which brings together Jewish Israeli and Palestinian teenagers at a US camp to build mutual understanding between the two groups and lasting friendships.

Border walls between Israel and Palestine mean an entire generation is growing up not knowing the other group. Yet, at the camp, two in three Seeds of Peace campers form at least one friendship with their respective outgroup while at camp, which typically leads them to feel more positive towards Israelis or Palestinians as a whole. 

Researchers designed a survey to measure the impact of the camp on positive feelings and increased openness toward Jewish Israelis/Palestinians. They tested the impact of question order on ratings of positivity toward the outgroup. Asking campers if they had made an outgroup friendship first, before asking them to rate positive feelings toward the outgroup in general, led to higher positivity ratings. 

Conversely, initial questions that create a negative frame of mind can skew answers in the other direction for later questions. 

Gallup used two different question orders in the Gallup Healthways Wellbeing Index – a daily telephone poll of 1,000 randomly sampled Americans that assesses subjective wellbeing. In 2009–10 – a period when many were unhappy with the political situation – asking questions about political satisfaction first dramatically reduced reported life satisfaction in subsequent questions.

Nobel Prize-winning economist Angus Deaton says: “People appear to dislike politicians so much that prompting them to think about them has a very large downward effect on their assessment of their own lives; over the 111th Congress (2009–2010), only 25% of the population approved of Congress. The effect of asking the political questions on wellbeing is only a little less than the effect of someone becoming unemployed; to get the same effect on average wellbeing, three-quarters of the population would have to lose their jobs.”

The four insights discussed above give us a new understanding and a more rigorous framework for thinking about how we design questionnaires to elicit the most accurate customer insight or opinion. Equally, they suggest how some may be manipulating responses by design.

Crawford Hollingworth is co-founder of The Behavioural Architects 
