OPINION | 1 September 2010

Let’s let respondents decide what matters

Roger Sant, vice president at Maritz Research, says we need to start thinking of respondents as customers – and give them a little more respect.

Companies put a lot of effort into improving the experiences of their customers, but this objective seems to be forgotten when we ask for their feedback. The very people who are responsible for providing the insights that steer customer improvement initiatives seldom deliver an enjoyable survey experience.

Customers are becoming more and more frustrated at being asked to fill out long, boring and often irrelevant questionnaires. Surveys are not giving anything back; people quite reasonably want to know how their data is being used and what they are going to get in return. Furthermore, customers want to tell us about the things they think are important, not the things we want them to talk about. And given the explosion of the internet, they can do this via any number of chat rooms, review sites and social networks in a shared and open manner; so why should people bother to fill out market research questionnaires which do not engage them and give them little or nothing in return?

As a result, people generally find surveys annoying and are becoming less willing to participate. As we all know, response rates are dropping, and if we don’t address this soon our sample data will become unrepresentative of our target market. Furthermore, if people are not engaged with the survey they will not put adequate thought into answering each question properly – they will simply want to get through the ordeal as quickly as possible.

“By focusing on memorable moments, the things that either added to or detracted from someone’s overall experience, we can make surveys more like a natural conversation and less like an interrogation”

This makes it difficult to work out what aspects of the experience really matter, and historically we have used bigger and bigger statistical hammers to address this. What we have not done is address the root cause of the problem: long and boring questionnaires that don’t encourage thoughtful responses. Unless customers are allowed to differentiate between the things that did and didn’t make a difference to their overall experience, no amount of analysis can untangle the data and identify the real drivers of loyalty and advocacy.

If we do not start to do a better job at engaging people in the feedback process, the data we collect will become less and less meaningful. This has big implications; if market researchers are to become true gatekeepers of the wide variety of information coming into organisations from different sources, they need to start to facilitate conversations, rather than attempt to control a one-way flow of information.

A better approach
If we allow people to talk about the things they want to tell us, we should also let them use their own words to express this. By focusing on memorable moments, the things that either added to or detracted from someone’s overall experience, we can make surveys more like a natural conversation and less like an interrogation.

There are a number of ways to do this. For example we can make much better use of open-ended questions. The top-level metrics that go into the dashboard (overall satisfaction, likelihood to recommend) can still be maintained, but these can be followed up with open questions that elicit the memorably good or bad moments relating to an experience. This can be done at an overall level (‘What was particularly good or bad about your most recent trip with British Airways?’) or separately for each major part of the customer journey (e.g. check-in, boarding and flight). Furthermore, advances in text analysis software help us to evaluate this qualitative feedback.

Another approach is to become much more flexible about the way we ask questions in surveys. Questionnaires need to be able to adapt to focus on the things that are most relevant to our customers. We have all completed surveys that have asked us to rate 10 or 15 attributes relating to an occurrence about which we remember very little, if anything at all. If you go to pay in a cheque at your bank, are you really likely to remember whether the cashier was friendly or whether the bank was clean and tidy? Unless something unusual happens, the chances are you’ll only remember that you went in, paid in the cheque, then left. Flexible or adaptive questioning enables us to focus the survey on things that actually made a difference. If something was average or not at all memorable, we should stop trying to extract the detail and move on.
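The skip logic behind adaptive questioning can be sketched in a few lines. This is a hypothetical illustration only, not Maritz’s actual survey system – the journey stages and question keys are invented for the example. The flow records the top-level metric, then only drills into the stages a respondent flags as memorably good or bad, skipping the rest entirely.

```python
# Hypothetical sketch of adaptive survey flow: only ask follow-up detail
# about journey stages the respondent says were memorable.

JOURNEY_STAGES = ["check-in", "boarding", "flight"]

def run_survey(answers):
    """Simulate an adaptive questionnaire.

    `answers` maps question keys to pre-supplied responses, so the
    flow can be exercised without interactive input.
    """
    # The dashboard metric is always collected.
    responses = {"overall_satisfaction": answers["overall_satisfaction"]}

    for stage in JOURNEY_STAGES:
        memorable = answers.get(f"{stage}_memorable", "no")
        if memorable == "no":
            # Nothing stood out: stop trying to extract detail, move on.
            continue
        # Something made a difference: capture it in the respondent's own words.
        responses[f"{stage}_comment"] = answers.get(f"{stage}_comment", "")

    return responses
```

The design choice mirrors the argument in the text: unmemorable stages simply produce no data, rather than forcing ratings the respondent cannot meaningfully give.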

This may lead to ‘missing data’, but by forcing people to rate things they have little or no opinion about we are getting misleading data anyway. This is when questionnaires get boring, customers get frustrated and we end up with people who don’t want to complete questionnaires in the future.

In a recent self-funded study we asked respondents about attributes that had had a particularly positive or negative effect on their overall experience. During the analysis phase, we established a significant improvement in the ability of our statistical model to explain variance in the data, leading to stronger predictions and a better understanding of the drivers of satisfaction.

In another recent study for a major international client, we applied a number of the techniques mentioned above and also asked some survey evaluation questions at the end to see what customers thought of the new survey. We found that a focus on open-ended questions, and the ability to skip the detail on issues that were not memorably good or bad, led to more positive perceptions of the survey than with the traditional customer satisfaction questionnaire. Significantly higher scores were achieved on evaluations such as ‘reasonable in length’, ‘reflects well on the brand’ and ‘willing to take this survey again in the future’.

If the industry carries on as it is, we’re going to end up with unrepresentative samples and less thoughtful responses, which will lead to a lack of direction in the findings. It’s a lose-lose situation. It would be better for all concerned to re-engage our customers in the feedback process. We need to think of respondents as customers and apply some of those frequently quoted ‘company values’ – trust, openness, honesty, respect – to the way we conduct surveys. To do this, we need to focus more on the things that matter to them, the things that people actually remember and care about.

7 Comments

14 years ago

These are great points, and I will point other people to this article. The one area I disagree with is the notion that respondents find open-ended questions easier -- a few open-ended questions are good, appropriate and are answered by most respondents. As the number of open-ended questions increases, though, we've seen response rates suffer. They are tiresome in quantity, especially for respondents who don't like to answer open ends. Simple closed-ended questions are more convenient for respondents.


14 years ago

The headline is perhaps a little misleading, as it suggests Maritz has just begun thinking about respondent engagement. The article indicates they have been doing so for some time, but what is surprising is that such an approach (treating respondents well in their survey contacts with us) is news. The evidence has been clear for years – treat respondents well and they will continue to respond. Treat them badly – with boring surveys, badly designed questionnaires, pointless or compulsory questions, or questions that have no relevant answers – and potential respondents (including professional marketing researchers who do other researchers' surveys) do get peeved. But it is the same in any customer service business (taxation perhaps excluded). That Maritz is writing about this is good, but what would be more useful would be if professional Market Research Societies really hammered this point home so the message actually reaches both those who design surveys and those who commission them. One of the strong benefits of online surveys, compared with other methods, is that we can now ask as many questions as the project needs, but do so smartly. That is, we can write the complete questionnaire we need, and if it is too long, divide it as many times as necessary to make each survey shorter and customer-respondent-friendly. We can similarly divide the sample, and so treat respondents with courtesy and respect yet still obtain all the information a client needs. All we need to do is inculcate this customer orientation into researchers and on (online) we go.


14 years ago

Great points. This is one of the reasons I prefer to do in-depth interviewing whenever feasible. The interview guide allows flexibility to skip lightly over irrelevant topics but to zoom in and explore elements about which the respondent simply has more to say (that is meaningful). Obviously they're not always an option but IDIs do address the respondent experience very well. Andy Perkins www.SatisfactionQuestionnaire.com


14 years ago

Responding to Jeffrey's comment – this very much depends upon the methodology being used. For instance, within a telephone survey a large proportion of open-ended questions is much better for the customer taking part. They are able to have a far more conversational experience, talking about the things that mattered most to them in their own words – a far cry from the more process-driven interrogations some surveys can become. With online surveys there is scope to mix the approach, using a balance of open-ended and closed questions. We feel it's important not to assume that convenience of response equates to good data. Closed questions force a response that may not best reflect the customer's view. I agree that all open questions throughout an online survey could become tiresome. I think the idea is to keep things varied so a customer remains engaged with the survey. Never provide too long a section of similar questions, open or closed, as this encourages poorer, less considered responses. It then depends on the length of the survey and the balance of topics throughout.


14 years ago

Thanks for saying what nobody wants to say! In Argentina, 45- and 60-minute door-to-door questionnaires are still often used. Abusing the generosity of respondents is very dangerous – it is a boomerang! Companies that buy market research should be very careful with this, and not accept questionnaires that could irritate their customers. In my opinion, quality in market research begins with a very well-written questionnaire, one proven to focus on what is important, and it should therefore not run over 20 or 25 minutes. I completely agree with distributing the questions across two questionnaires if the objectives of a study require too many questions. The problem is that, at least in market research, quality is not consistent with low prices. If companies continue to buy market research on price rather than quality, the number of people refusing to answer surveys will continue to increase.


14 years ago

To respond to Philip Derham’s comments: we all know, and have for some time, that boring respondents to death is not a wise idea. At Maritz we have been working for some time on ensuring MR contributes to a positive customer experience, and have as a result funded several studies to show how best to do this. The reason we were asked by Research to contribute this piece now was the new research Roger cites in the article, which clearly demonstrates the benefit of designing shorter, more engaging questionnaires – not only from the point of view of response rates and data quality, but also because it contributes positively to a customer’s overall brand experience.


14 years ago

I think one of the problems is that when we come into work we all forget that we are consumers. This is not unique to the market research industry. I have given many presentations to senior management who never experience life as one of their own consumers and rarely give any thought to what is happening and what needs to be done. I often have to show them familiar examples of customer service and ask what they would do if they received the service their own organisation was offering, to ram the point home. Perhaps all market researchers should be required to complete at least 10 surveys a month.
