OPINION
29 June 2009

Bad respondents: a Promethean myth


Online survey research is widely considered to have a data quality problem. Conventional wisdom says ‘bad’ respondent behaviour is to blame. But pointing the finger in their direction ignores the industry’s culpability in conditioning respondents to behave in this way, says Patrick Comer, senior vice president of OTX Research.

Each industry effort to improve data quality assumes that something is wrong with how respondents are taking surveys. However, respondents are only doing what panels ask them to do: take surveys, and lots of them. Much as in Frankenstein, it is easier to focus on the monster than on the root cause: technology and industry growth have created this respondent behaviour in the first place. Companies only cloud the issue by applying technology filters to behaviour and blaming the respondent for poor research quality, labelling them with such monikers as bad, fraudulent, inattentive, unverified, speedster and cheater.

Problems are to be expected, as the industry is still catching its breath from the technology tsunami that triggered the transition from phone to online surveys. Methodologies for both data collection and sampling are still developing rapidly, with easy access to cheap respondents driving the change. Demand for online panel sample has been so high that new sample companies pop up every year. Yet even with new sources of sample coming online, the incoming tide has created over-usage that is degrading the quality of the survey-taking stock.

Tragedy of the commons[i]
Our industry is subject to the challenges of the “freedom of the commons”, since respondents are a shared limited resource. Clients, market researchers and sample companies use the same pool of respondents and are rationally inclined to over-usage.

  1. The majority of profit growth for sample companies comes from selling completes on a CPI (cost per interview) basis. They have every incentive to create an environment that increases usage, especially since more profit is made from an existing panellist taking more surveys than from recruiting a new one.
  2. Market research firms, operating in an ultra-competitive marketplace, are always looking to lower costs, and one of the key factors in their margins is the cost of sample.
  3. 2009 is the year of budget cuts, so the end client is also looking for lower CPIs and project costs. 
  4. The entire value chain is therefore demanding higher usage and lower costs, which forces sample providers to use their respondents more often, creating inferior data collection behaviour.

End clients react by questioning the authenticity of the analysis and sampling, which in turn forces research firms to investigate data collection behaviour and define key challenges: speedsters, straight-lining, satisficing and so on. Sample companies respond with a combination of technology – to prove quality and filter out those behaviours – and marketing messages that implicate the respondent: we must ‘purify, verify, de-duplicate, rehabilitate and sanctify’ them. But respondent over-usage is not solved by filtering out the bad behaviour. In fact, technology further decreases the number of available respondents and quickens the conditioning of those who do pass the filters.

Sample companies are now forced to use respondents more efficiently, either through routing mechanisms or by buying sample from third parties such as other panels, rewards programmes, social networks and river methodologies. Given that no panel or research firm controls the majority of sample supply, broadening recruitment is only a short-term solution: current demand for sample will continue to create conditioned respondents at a rapid rate.

Over the past ten years, clients have been convinced that panel sampling is the gold standard. Online sampling is evolving rapidly, and our ability to explain and justify that change has fallen short. We are now in a multi-source sampling world, yet most clients neither understand nor accept the transition. The panel standard as the only acceptable recruitment process is now hindering progress. Repositioning sample as a multi-sourced, respondent-focused environment is vital. Quality concerns will only continue to grow unless all stakeholders understand the problem of respondent over-usage and its impact on research quality.


[i] Garrett Hardin, “The Tragedy of the Commons”, Science, Vol. 162, No. 3859 (13 December 1968), pp. 1243–1248.
