FEATURE | 10 January 2014

Selection boxes, slurred words and satisfaction

Opinion

Alongside the Christmas debate on whether to have cheese before or after dessert, the question of when to measure overall satisfaction is contentious. DJS Research’s Alex McCluckie (aided by his uncle Don) offers his view.

It’s Christmas Day and the family are gathered around. Presents have been opened and selection boxes are being devoured. Across the room I see my uncle Don lean over and ask my sister: “So, how’s Christmas treating ya?”

Now, there is certainly more to the festive season than the euphoria of receiving a new Google Nexus tablet or the panic of covering the ‘Oh God, not again’ expression that comes with opening yet another pair of socks. Indeed, Christmas enjoyment encompasses many things – the opening of the presents, the blissful smell of turkey and the great company, to name just three.

But fast-forward 12 hours and several bottles of wine, and once again, I see my uncle, with a decidedly drunken look on his face, lean over and ask my sister for the second time: “So, how’s Christmas treating ya?” Uncle Don is renowned throughout the McCluckie clan for his late stays at family gatherings – remaining long after the majority of the party has headed off or desperately want to go to bed. It was during this annual occurrence that I began thinking about question positioning and context, and more specifically the old debate on overall satisfaction. How’s that for some Christmas musings?

Perfect positioning

The discussion around where the overall satisfaction question should be placed within, say, a customer satisfaction questionnaire is not a new one. Indeed an old discussion post by Ray Poynter has summed up the matter by stating that: “…the best place for the question is near the start, if ‘real’ views are to be gained. If considered, rational, views are important for a project, then it might make sense to ask it at the end of the questionnaire.”

On the face of it, this makes sound sense: people need to consider each of the areas of service in order to gain a complete picture of how a service provider is performing before they can give an informed response. However, research suggests that these considered, rational views may not, in fact, be answers to the question that was actually asked.

Attribute substitution

Let me explain further. It would come as a shock to no one to hear that certain tasks are more mentally taxing than others. What may surprise people is that there is a mechanism people turn to (albeit unconsciously) in order to ease the cognitive load.

According to Daniel Kahneman and Shane Frederick, when a question is too mentally taxing, respondents can unknowingly answer a different, easier question in its place – a phenomenon called attribute substitution.

To quote the aforementioned authors, if an evaluation is a difficult one, “…the target attribute does not come to mind immediately, but the search for it…activates the value of other attributes that are conceptually and associatively related”.

This mental trickery is more than mere conjecture. A study cited by Kahneman and Frederick found that when college students were asked the question “How happy are you with your life in general?” followed by the question “How many dates did you have last month?”, the correlation between responses was 0.12. When the dating question was asked first, however, the correlation between the two questions rose to 0.66.

This correlation led the authors to suggest that “…thinking about the dating question automatically evokes an affectively charged evaluation of one’s satisfaction in that domain of life, which lingers to become the heuristic attribute when the happiness question is subsequently encountered”.

The substitution that took place in this example illustrates accessibility: when a concept is already present in your working memory it is said to be accessible, and as a result, other concepts related to it become more accessible too.
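The mechanics of that study can be illustrated with a toy simulation (all numbers, weightings and domain counts here are invented for illustration, not taken from the original study): when the “overall” answer leans heavily on the just-asked attribute, its correlation with that attribute jumps.

```python
import random
import statistics

random.seed(42)
N = 2000

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Each respondent has independent satisfaction levels in four life domains;
# domain 0 stands in for "dating".
domains = [[random.gauss(0, 1) for _ in range(4)] for _ in range(N)]
dating = [d[0] for d in domains]

# Unprimed: the overall-happiness answer averages all domains (plus noise).
unprimed = [statistics.mean(d) + random.gauss(0, 0.5) for d in domains]

# Primed: the dating question was just asked, so that attribute dominates
# the "overall" answer (attribute substitution).
primed = [0.8 * d[0] + 0.2 * statistics.mean(d[1:]) + random.gauss(0, 0.5)
          for d in domains]

print(round(pearson(dating, unprimed), 2))  # modest correlation
print(round(pearson(dating, primed), 2))    # much stronger correlation
```

The absolute values depend on the invented weightings, but the direction of the effect – a far stronger correlation when the attribute is primed first – mirrors the pattern Kahneman and Frederick describe.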

Survey design implications

By designing a customer satisfaction survey in which the overall satisfaction question comes at the end, we make the evaluation highly accessible – which, superficially, may appear to be no bad thing. After going through all the individual attributes, you could argue, each attribute may be equally accessible by the time the overall satisfaction question comes around, giving a good overall picture of respondent satisfaction.

However, if respondents find giving an overall rating too difficult, they may inadvertently answer an earlier, easier question instead (say, one about a single attribute) – and that substitution may introduce bias into our data.

In our role as choice architects then, such findings are important and may well be further evidence for asking the overall satisfaction question at the beginning of our surveys as opposed to the end in order to gain, as Poynter calls them, ‘real views’. Otherwise, like old uncle Don who began this story, as the party/survey rolls on, the different facets of the experience may disproportionately colour our views until we are unable to give a true overall picture anymore.
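In practice the fix is a small one at the questionnaire-design stage. A minimal sketch (the attribute names and question wording are hypothetical, not taken from any DJS Research survey) of building the question order with the overall item first:

```python
# Hypothetical attribute list – invented for illustration.
ATTRIBUTES = ["value for money", "speed of response", "staff helpfulness"]

OVERALL_Q = "Overall, how satisfied are you with the service?"

def build_survey(overall_first: bool = True) -> list[str]:
    """Return the question order. Asking the overall item first captures
    the 'real', top-of-mind view before the attribute items make related
    concepts highly accessible."""
    attribute_qs = [f"How satisfied are you with {a}?" for a in ATTRIBUTES]
    if overall_first:
        return [OVERALL_Q] + attribute_qs
    return attribute_qs + [OVERALL_Q]

print(build_survey()[0])  # the overall question leads
```

The same list with `overall_first=False` gives the end-of-survey placement that the attribute-substitution argument warns against.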

Alex McCluckie is a senior research executive at DJS Research.

Reference:

Kahneman, D. and Frederick, S. (2002). Representativeness revisited: attribute substitution in intuitive judgment. In Gilovich, T., Griffin, D. and Kahneman, D. (Eds.), Heuristics and Biases: The Psychology of Intuitive Judgment (pp. 49-81). New York, NY: Cambridge University Press.

2 Comments

6 years ago

Having debated this question with several peers and colleagues, I've found most have argued for the beginning rather than the end (the end is favoured by methodologists because the model then explains more variance – like your 0.66 correlation above). However, it may well depend on the consumers and the product categories. In most cases, consumers tend to make that decision on the basis of 3-5 key determinants (for example, price, gas mileage, reliability, etc.), and this shapes the overall satisfaction decision. Marching them through a battery of 30+ attributes only serves to muddle things – no consumer will ever consider all 30 attributes, even in complex purchase scenarios.


6 years ago

It does indeed depend on the market. There are many markets and services that consumers take for granted. It can be very useful to know how consumers answer an overall satisfaction question in the real-life context of never actually considering the service. If most people are 'very satisfied' until they are asked to consider the issue in detail, it would be VERY important to look at the minority who declare themselves, in exactly the same context, to be dissatisfied. Horses for courses.
