OPINION | 1 September 2010

Let’s let respondents decide what matters


Roger Sant, vice president at Maritz Research, says we need to start thinking of respondents as customers – and give them a little more respect.

Companies put a lot of effort into improving the experiences of their customers, but this objective seems to be forgotten when we ask for their feedback. The very people who are responsible for providing the insights that steer customer improvement initiatives seldom deliver an enjoyable survey experience.

Customers are becoming more and more frustrated at being asked to fill out long, boring and often irrelevant questionnaires. Surveys are not giving anything back; people quite reasonably want to know how their data is being used and what they will get in return. Furthermore, customers want to tell us about the things they think are important, not the things we want them to talk about. And given the explosion of the internet, they can do this via any number of chat rooms, review sites and social networks in a shared and open manner. So why should people bother to fill out market research questionnaires that do not engage them and offer little or nothing in return?

As a result, people generally find surveys annoying and are becoming less willing to participate. As we all know, response rates are dropping, and if we don’t address this soon, our sample data will become unrepresentative of our target market. Furthermore, if people are not engaged with the survey they will not put adequate thought into answering each question properly – they will simply want to get through the ordeal as quickly as possible.

“By focusing on memorable moments, the things that either added to or detracted from someone’s overall experience, we can make surveys more like a natural conversation and less like an interrogation”

This makes it difficult to work out what aspects of the experience really matter, and historically we have used bigger and bigger statistical hammers to address this. What we have not done is address the root cause of the problem: long and boring questionnaires that don’t encourage thoughtful responses. Unless customers are allowed to differentiate between the things that did and didn’t make a difference to their overall experience, no amount of analysis can untangle the data and identify the real drivers of loyalty and advocacy.

If we do not start to do a better job at engaging people in the feedback process, the data we collect will become less and less meaningful. This has big implications; if market researchers are to become true gatekeepers of the wide variety of information coming into organisations from different sources, they need to start to facilitate conversations, rather than attempt to control a one-way flow of information.

A better approach
If we allow people to talk about the things they want to tell us, we should also let them use their own words to express this. By focusing on memorable moments, the things that either added to or detracted from someone’s overall experience, we can make surveys more like a natural conversation and less like an interrogation.

There are a number of ways to do this. For example we can make much better use of open-ended questions. The top-level metrics that go into the dashboard (overall satisfaction, likelihood to recommend) can still be maintained, but these can be followed up with open questions that elicit the memorably good or bad moments relating to an experience. This can be done at an overall level (‘What was particularly good or bad about your most recent trip with British Airways?’) or separately for each major part of the customer journey (e.g. check-in, boarding and flight). Furthermore, advances in text analysis software help us to evaluate this qualitative feedback.
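To make the idea concrete, here is a minimal sketch of how open-ended answers might be grouped into themes before being reported alongside the dashboard metrics. The theme names and keyword lists are illustrative assumptions, not a real taxonomy, and commercial text analysis software would do this far more robustly.

```python
from collections import Counter

# Illustrative theme taxonomy (assumed for this sketch).
THEMES = {
    "staff": ["friendly", "rude", "helpful", "crew"],
    "delays": ["late", "delay", "queue", "waiting"],
    "comfort": ["seat", "legroom", "clean"],
}

def tag_themes(answer):
    """Return the set of themes whose keywords appear in one open-ended answer."""
    words = answer.lower().split()
    return {theme for theme, keywords in THEMES.items()
            if any(kw in words for kw in keywords)}

def theme_counts(answers):
    """Count how many respondents mentioned each theme."""
    counts = Counter()
    for answer in answers:
        counts.update(tag_themes(answer))
    return counts
```

A call such as `theme_counts(["The crew was friendly", "Long delay while boarding"])` would attribute one mention each to the staff and delays themes, giving a simple qualitative summary to sit next to the top-level satisfaction scores.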

Another approach is to become much more flexible about the way we ask questions in surveys. Questionnaires need to be able to adapt to focus on the things that are most relevant to our customers. We have all completed surveys that have asked us to rate 10 or 15 attributes relating to an occurrence about which we remember very little, if anything at all. If you go to pay in a cheque at your bank, are you really likely to remember whether the cashier was friendly or whether the bank was clean and tidy? Unless something unusual happens, the chances are you’ll only remember that you went in, paid in the cheque, then left. Flexible or adaptive questioning enables us to focus the survey on things that actually made a difference. If something was average or not at all memorable, we should stop trying to extract the detail and move on.
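The adaptive flow described above can be sketched in a few lines. The function names, rating scale and memorability cut-offs below are hypothetical assumptions chosen purely for illustration; the point is the branching logic: drill into detail only when an experience was memorably good or bad, and otherwise move on.

```python
def follow_up_questions(touchpoint, rating):
    """Return detail questions only for memorable experiences.

    Ratings are assumed to be on a 0-10 scale; the cut-offs below are
    illustrative, not empirically derived.
    """
    memorably_bad, memorably_good = 3, 8  # assumed thresholds
    if rating <= memorably_bad:
        return [f"What in particular went wrong during {touchpoint}?"]
    if rating >= memorably_good:
        return [f"What in particular stood out about {touchpoint}?"]
    return []  # average, unmemorable experience: skip the detail

# Usage: build a short, relevant survey from overall touchpoint ratings.
ratings = {"check-in": 9, "boarding": 5, "flight": 2}
survey = {tp: follow_up_questions(tp, r) for tp, r in ratings.items()}
```

In this sketch the respondent who rated boarding as average is asked nothing further about it, while the memorably good check-in and memorably bad flight each trigger a single open follow-up.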

This may lead to ‘missing data’, but by forcing people to rate things they have little or no opinion about we are getting misleading data anyway. This is when questionnaires get boring, customers get frustrated and we end up with people who don’t want to complete questionnaires in the future.

In a recent self-funded study we asked respondents about attributes that had had a particularly positive or negative effect on their overall experience. During the analysis phase, we established a significant improvement in the ability of our statistical model to explain variance in the data, leading to stronger predictions and a better understanding of the drivers of satisfaction.

In another recent study for a major international client, we applied a number of the techniques mentioned above and also asked some survey evaluation questions at the end to see what customers thought of the new survey. We found that the focus on open-ended questions, combined with the ability to skip the detail on issues that were not memorably good or bad, led to more positive perceptions of the survey than the traditional customer satisfaction questionnaire. Significantly higher scores were achieved on evaluations such as ‘reasonable in length’, ‘reflects well on the brand’ and ‘willing to take this survey again in the future’.

If the industry carries on as it is, we’re going to end up with unrepresentative samples and less thoughtful responses, which will lead to a lack of direction in the findings. It’s a lose-lose situation. It would be better for all concerned to re-engage our customers in the feedback process. We need to think of respondents as customers and apply some of those frequently quoted ‘company values’ – trust, openness, honesty, respect – to the way we conduct surveys. To do this, we need to focus more on the things that matter to them, the things that people actually remember and care about.