FEATURE, 6 June 2009

The good, the bad and the ugly respondent


Getting under the skin of so-called ‘bad respondents’

US— First we wanted to believe bad respondents weren’t there, then we wanted to chase them away. Now everyone wants to get to know them.

The Advertising Research Foundation (ARF) has just reported back from a study that set out to answer what it saw as an unasked question: does a ‘professional’ respondent really constitute a ‘bad’ respondent, and might there actually be good things about dedicated survey takers?

A newly formed ARF committee is now looking to further explore the behaviour of different types of respondents, and to come up with a ‘behavioural index’ to help understand them.

The foundation’s initiative came as the industry began to walk the talk on online sample quality, with firms such as Peanut Labs, MarketTools and Western Wats all touting their panel-cleaning solutions.

MarketTools, maker of the TrueSample tool, is encouraging a more enlightened approach to the issue in the light of results revealing more about the types of respondents who are out there and how they behave.

In a webinar sponsored by the firm yesterday, Amy Millard, VP of marketing for TrueSample, warned that the challenge is not as simple as identifying the ‘good’ and the ‘bad’.

‘Bad’ respondents, she said, can be divided into those who are ‘fake’ (using false details), ‘duplicate’ (appearing more than once in the panel or sample), and ‘unengaged’ (not answering questions properly).

According to MarketTools’ findings, duplicates are likely to be more enthusiastic, giving more positive answers to survey questions, while fakes are less enthusiastic. This makes some kind of sense – those who are trying to take as many surveys as they can are more enthusiastic than others, while those who feel the need to conceal their real identities are more sceptical. Millard said: “If [these types of respondents] were in there at a balanced level you could imagine that it would all work out, but they’re not.”

Cynics might say it’s hardly surprising that MarketTools paints a scary picture of a problem for which it is selling a solution, but its conclusions certainly raise some interesting questions.

The proportion of ‘bad’ respondents detected by TrueSample over a three-month period last year was higher than expected: 28.8% for a single panel, increasing to 47.9% when two panels were combined. And just when you thought you knew what to do about it, the differing attributes of fake, duplicate and unengaged respondents mean that removing some but not all of them could actually make data less reliable.

In another study published late last year, the firm claimed that increasing sample size was not a guaranteed solution either, as “small differences in how good and bad respondents answer questions are more likely to be statistically significant as the sample size grows”.
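The statistical point here can be sketched with a quick calculation. The script below is purely illustrative and not drawn from MarketTools’ study: it assumes hypothetical answer rates for ‘good’ and ‘bad’ respondents that differ by just three percentage points, then applies a standard two-proportion z-test at several sample sizes to show how a small, systematic difference crosses the significance threshold as n grows.

```python
from math import sqrt

def z_stat(p1, p2, n):
    """Two-proportion z statistic, assuming n respondents per group."""
    p_pool = (p1 + p2) / 2
    se = sqrt(2 * p_pool * (1 - p_pool) / n)
    return abs(p1 - p2) / se

# Hypothetical answer rates (illustrative only): 'good' respondents
# say yes 50% of the time, 'bad' respondents 53% of the time.
p_good, p_bad = 0.50, 0.53

for n in (500, 5000, 50000):
    z = z_stat(p_good, p_bad, n)
    verdict = "significant" if z > 1.96 else "not significant"
    print(f"n={n:6d}  z={z:.2f}  {verdict}")
```

At n=500 the three-point gap is statistical noise, but by n=5,000 it is significant at the 95% level, which is the firm’s point: a bigger sample does not dilute a systematic bias, it makes that bias harder to dismiss as chance.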

Microsoft’s research director Steve Schwartz said the key to dealing with these issues is transparency. “We now ask for much more detailed sample dispositions to understand how many people are coming internationally or in the middle of the night and filling out surveys, how many are speeding and failing track questions – what happens to those people and what are their responses looking like? It’s opened up a lot more discussion between us and the sample providers. It does take more time from the clientside than it has historically, but it’s certainly necessary because we’re seeing the level of professional respondents and fakers, for b2b in our case, to be high enough that it impacts the results.”

John Ouren, general manager of panels and communities at MarketTools, agreed. “Folks that I’ve engaged with on the clientside have learned a lot by really engaging and understanding the practices that are important for their sample providers, how they’re recruiting their members and how panels are managed, which historically has been a relatively opaque topic. I think transparency into how panels are managed … just helps to deliver a greater degree of confidence in the sample that’s being provided.”

Author: Robert Bain

Related links:

ARF committee takes on online quality challenge

MarketTools counts the cost of bad respondents

Are professional respondents really so bad? ARF investigates

Panel quality tools set to ‘change the online research game’