OPINION
11 August 2010

Online surveys don't have to turn respondents off

In the July issue of Research, after spending a month as a fake respondent, Robert Bain wrote that the market research industry still has work to do to make online surveys appeal to respondents. That’s not news to Deborah Sleep from Engage Research, which has conducted a study of its own into how to make online surveys more appealing and more effective.

How many times a day, on average, do you receive an email inviting you to participate in an online survey? And of those you actually go online to fill in, how many do you stick with through to the end, and how many do you bail out of halfway through because it’s so dull you realise you’ll simply never get those fifteen minutes back?

Me too.

Therein lies a major problem for marketers looking to gather opinion using online surveys. The internet is a double-edged sword. Online surveys remain an important, versatile and affordable way to find out what people think, but if brands and agencies are serious about engaging consumers in this way, their approach to the process needs to be significantly more creative.

Research conducted by Engage in collaboration with GMI showed that bored respondents are rushing through online questionnaires, answering in patterns just to get through the survey quickly and ultimately providing far less usable data.

We believe – and the research supports this – that response rates to online surveys can be significantly improved by changes to the structure of questionnaires, through innovative question mechanics, the use of visuals and animation and the deployment of imaginative techniques such as role play. Our findings indicate that, with these changes, completion rates can be improved by more than 20%, together with a significant growth in data volume and quality. Something as fundamental as grabbing a respondent’s attention from the start of the survey with an animated introduction, for instance, led consumers to spend up to 80% more time answering follow-up questions.

Employing various role-play techniques to motivate respondents was found to increase evaluation time by 70%, double the number of words used in responses and significantly increase the number and quality of ideas and suggestions. One example which worked extremely well was to ask consumers to evaluate the impact of advertising from the perspective of someone working for an advertising agency. Another was to ask them to place themselves in the role of a boss of a company launching a new product. Both gave them a more interesting context in which to answer questions and the results improved accordingly.

For questions that ask consumers to compose a written answer rather than ticking boxes in a grid, showing examples of how previous respondents had completed the question revealed that the volume of responses could double, with word count jumping from 25 to 50 words. That can have a major impact on what the survey reveals.

If there are a significant number of questions to answer, it also helps to include a brief, light-hearted departure from the survey, which provides relief and prevents respondents from becoming bored.

Grid questions are a simple way to gather opinion – the problem is, respondents find them utterly boring, causing 80% more drop-out than other question types and earning a lowly 3/10 rating for respondent enjoyment in our research. Respondents try to speed through grid questions, and the effect is that their answers flatline.
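
To make the idea of flatlining concrete, here is a minimal sketch – a hypothetical illustration rather than anything from the Engage/GMI study – of how one might flag grid responses where a respondent has straight-lined down a single column. The function names and thresholds are assumptions for the example.

```python
# Hypothetical sketch: flag respondents who "straight-line" a grid question,
# i.e. give the same (or nearly the same) rating to every row in the grid.
# Thresholds below are illustrative, not taken from the Engage/GMI research.

from statistics import pstdev

def straightline_share(grid_answers):
    """Share of grid rows that repeat the most common answer."""
    modal = max(set(grid_answers), key=grid_answers.count)
    return grid_answers.count(modal) / len(grid_answers)

def is_flatlined(grid_answers, share_threshold=0.9, spread_threshold=0.25):
    """Treat a grid response as flatlined if almost every row carries the
    same rating, or the ratings barely vary at all."""
    if len(grid_answers) < 2:
        return False
    return (straightline_share(grid_answers) >= share_threshold
            or pstdev(grid_answers) <= spread_threshold)

# A bored respondent clicking the same column all the way down:
print(is_flatlined([4, 4, 4, 4, 4, 4, 4, 4]))   # True
# A respondent who is actually reading each row:
print(is_flatlined([4, 2, 5, 3, 4, 1, 5, 2]))   # False
```

Screening answers with a check along these lines is one way to quantify how much usable data a boring grid is actually producing.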

Creating more interesting designs and layouts makes the process far more engaging, and radically improves interest and enjoyment (from a 3/10 score to 8/10), generating significantly more granular data.

Question wording also requires careful consideration. Many online surveys stick to the conventions used in face-to-face interviewing, making the whole process of reading instructions cumbersome – a hangover from interviewer-led research, where respondents do not have a visual scale in front of them. Long sets of instructions have no measurable influence on the balance of answers to a question but, more importantly, a pile of unnecessary instructions acts like the boy who cried wolf: if someone reads an instruction that turns out to be unnecessary, they are discouraged from bothering to read the next one – which could be more important. How significant is this? By the end of a typical survey, up to 50% of respondents are no longer reading instructions properly.

Social influence techniques can also be used to great effect in setting an expectation. Used carefully, an instruction to “spend no more than …” can multiply the time a respondent spends answering a question by a factor of five. In some instances, it may be worth making a question optional. Prefacing a question with the phrase “Would you mind” was found to be a successful means of improving the volume and quality of data and, in our experiments, fewer than 10% of respondents elected to skip the question.

Our approach to online surveying needs to change. It is still being too heavily influenced by the conventions of face-to-face interviewing where there are huge set-up and organisational constraints. Many of these rules can and need to be re-thought when conducting online surveys in order to avoid an ongoing decline in response and quality.

Online consumer research can reach far greater numbers far more quickly and less expensively, and can provide the intelligence that will influence everything from product design to price, from channels to market through to marketing messages. But extracting the information that is going to be most valuable requires more thought and engagement than ever before.

The lesson marketers must heed is that their money could yield a far better return if they overhaul their approach to engaging the interest of consumers.

13 Comments

14 years ago

I think this part of Deborah's article, where she talks about grid questions, is very interesting: "respondents find them utterly boring, causing 80% more drop-out than other question types". It's great to see this issue confirmed by a study. Would it be possible for the author to give some details on the size and scope of the study that produced these results?

14 years ago

I decided to sign up for a couple of online access panels to see what the latest is out there, and it's worse than I imagined. I was sent a link to a survey that was 2 hours long. Not only that, almost all of the questions were grids. Yes, two hours of grid questions!

14 years ago

This is great common sense, and researchers have been well aware of it for many years. We know it's a problem and we know how to counteract it. And yet we don't. We simply don't. I'd like to see some opinions on how to encourage clients to agree to these changes, and some stories from researchers about how they persuaded clients to accept them. Right now, there's a lot of talk and little action.

14 years ago

The data Steve was asking about came from a frame-by-frame evaluation of 550 surveys conducted by GMI – covering a mix of subject matter, target samples, interview lengths, etc. We examined drop-out by type of question, and this gave us the 80% figure.

14 years ago

Are you surprised? Sample-only companies will pile it high and sell it cheap – I'm therefore not surprised that you were able to deduce such figures from GMI's surveys (sample-only firms will always bend if the client is insistent – fact). You should look at the figures that some reputable full-service/online companies get from their surveys. If you look at figures for GMI/Toluna et al, they'll still be posting similar figures in two or three years (because they won't change, but times are already changing in better-quality research houses)... no doubt this article will be rehashed and written up using data from SSI/Toluna in two years' time – nothing new here.

14 years ago

One approach which I believe has been considered is for the online panel providers – who find it difficult to control the content of the surveys their clients prepare – to offer a discount on surveys that receive a high satisfaction score from participants, or conversely to charge a premium for low-satisfaction surveys. The fundamental problem, though, is that most panel companies are not bold or confident enough to implement such a system, which could cause them to lose business, even though it is likely to be bad business from clients who are damaging their panels. Similarly, clients are not confident enough in the quality of their surveys to benefit from the offer... or, more likely, clients know that their surveys are too long and boring but aren't committed to fixing them.

14 years ago

Mark is on the right track. A somewhat more client-friendly approach is simply to report respondent satisfaction scores prominently in each client study. If we can correlate respondent satisfaction and research quality, as Deborah has done here, then clients will begin to see the importance of achieving at least average respondent satisfaction levels.

14 years ago

A word of warning on the use of innovative question mechanics, visuals and animation. As an industry we need to ensure that 'engaging' surveys comply with DDA and W3C accessibility guidelines. The use of Flash within surveys often produces usability issues for the respondent, both on the web and via smart mobile devices. Engaging surveys are great when they can be answered by the intended audience.

14 years ago

From the research-on-research (R on R) I've experienced, online surveys need to be no more than 15-20 minutes in length and be relevant and engaging for participants, as well as gathering valid and reliable data. Be cautious of changing the look and feel of the screen: I have seen data showing that the more visually exciting the screen, the less participants read and understand the stem question. They then guess at what the question is asking and go straight to the visually exciting scale. Also, we have found that data from different visual response scales (1-10, anchored or not, dials, sliders, etc.) on the same question is very different. So our challenge as researchers is to consider the purpose of the survey and the functionality of the survey programming software – and do what makes the most sense to gather valid and reliable data.

14 years ago

@Deb Sleep: thanks for that! Interesting comments about DDA and W3C compliance too from Michelle. Given the non-support of Flash on new, emerging devices, do we all hold out for HTML5 or accept that a small percentage of respondents will not be able to use Flash? We're all aware that respondents are completing web interviews on phones and mobile devices, but given the limited screen real estate, should we be preventing their participation?
