OPINION
20 August 2015

Removing the jokers from the pack


As a quantitative executive who specialises in online methodologies, I have come across my fair share of suspicious-looking data when we’ve used panel sample – the ‘jokers’ that could compromise the results we deliver to our clients.


In fact, in a recent study conducted by McCallum Layton via a UK-based panel provider, a shocking 352 completed interviews out of a total base of 2,000 had to be removed and replaced due to various quality-control issues, which included the following (a rough screening sketch in code follows the list):

  • ‘speedsters’ – those who complete the survey far too quickly for their answers to have been properly considered
  • ‘flatliners’ – those who repeatedly give the same answer
  • nonsense verbatims – random letters, or responses that don’t answer the question
  • contradictions in responses – e.g. a respondent says he has a son, but then later in the survey, the son magically disappears
  • offensive language – I’m all for passionate responses, but when the respondent has simply filled the space with swear words, they have to go!

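For readers who like to see the mechanics, here is a minimal sketch of the kind of rule-of-thumb screening the first three checks imply, written in Python with pandas. The column names (‘duration_secs’, ‘verbatim’), the grid columns and the 300-second threshold are illustrative assumptions, not McCallum Layton’s actual rules; contradictions and offensive language usually still need bespoke logic or a human read.

```python
import pandas as pd


def flag_suspect_completes(df: pd.DataFrame, grid_cols: list,
                           min_duration_secs: int = 300) -> pd.DataFrame:
    """Add simple quality-control flags to a file of completed interviews.

    Assumes illustrative columns: 'duration_secs' (interview length in
    seconds) and 'verbatim' (an open-ended answer); grid_cols names the
    rating-grid columns. Thresholds are placeholders, not house rules.
    """
    flags = pd.DataFrame(index=df.index)

    # 'Speedsters': finished far too quickly to have read the questions.
    flags["speedster"] = df["duration_secs"] < min_duration_secs

    # 'Flatliners': the same answer given to every item in the grid.
    flags["flatliner"] = df[grid_cols].nunique(axis=1) == 1

    # Nonsense verbatims: empty, or keyboard-mashing with no vowels.
    verbatims = df["verbatim"].fillna("").str.strip()
    flags["nonsense_verbatim"] = (verbatims == "") | ~verbatims.str.contains(
        r"[aeiou]", case=False, regex=True)

    flags["any_flag"] = flags.any(axis=1)
    return df.join(flags)
```
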
Bearing this in mind, we really owe it to our respondents to provide them with engaging and stimulating surveys to make sure they don’t get bored. But when the average panellist is on five or six panels and receiving many invites per week, it’s difficult to make our surveys truly stand out.

Most issues come from real-life respondents, but one of the most worrying trends for me is the growing sophistication of automated programs designed to ‘cheat’ our carefully constructed questionnaires.

While checking the data on a different survey, we found 30 completes that seemed to draw on a standard set of around eight verbatim responses – the phrasing, punctuation, spacing and spelling mistakes were identical, and couldn’t have come from unrelated ‘real-life’ respondents. More worryingly, these verbatims all referenced the topic of the questionnaire, so they wouldn’t necessarily be detectable to the untrained eye.

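Catching that kind of shared bank of verbatims comes down to looking for open ends that repeat, character for character, across supposedly unrelated completes. Below is a sketch of one way to surface candidates, again assuming a hypothetical ‘verbatim’ column; anything it returns still needs a human read, since genuine short answers such as ‘don’t know’ will also repeat.

```python
import pandas as pd


def repeated_verbatims(df: pd.DataFrame, min_group_size: int = 3) -> pd.Series:
    """Count open-ended answers that recur across supposedly unrelated completes.

    Only whitespace and case are normalised, so shared punctuation and
    spelling mistakes - the giveaway described above - still match exactly.
    """
    normalised = (df["verbatim"].fillna("")
                    .str.strip()
                    .str.lower()
                    .str.replace(r"\s+", " ", regex=True))
    counts = normalised.value_counts()
    # Keep only non-empty answers used by several different completes.
    return counts[(counts >= min_group_size) & (counts.index != "")]
```
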
When we approached the panel company to report this, they said the IDs in question came from 30 completely different IP addresses, and they simply couldn’t have uncovered these fraudulent responses using their own initial checks. Once some retrospective digging was done, the perpetrators were found, but the panel provider wouldn’t have been aware of them if we hadn’t flagged the issue.

Interestingly, when the same survey was relaunched over a year later, we spotted the same bank of eight verbatims being called upon again. Having just completed the fourth wave of the research, I can confirm it’s still an issue: despite changing panel provider, we have to remain vigilant against this kind of activity.

So I think it falls to us – the researchers and analysts – to give detailed feedback to our panel partners to root out the people who are consistently providing us with unreliable data. Speaking to others in the industry, I’m not sure that the process of checking data quality is deemed as important as the analysis and reporting stages. If everyone contributes to this effort, we can help to drive sample quality to the top of the agenda. And if these fraudsters are proving elusive, we need to (at the very least) replace their interviews so our clients are always getting the best possible quality of data.

Laura Finnemore is a senior research executive at McCallum Layton


8 Comments

9 years ago

The jokers in the pack – love it, Laura! Thanks!


9 years ago

Great article, Laura – thanks! Out of curiosity, did you notice the "jokers" coming from one particular market, or was this a UK-based study? Do you do all of your quant studies online, or do you ever use CATI instead?


9 years ago

Laura, a nice post. You explain well some of the challenges facing sample providers. We can and do have extensive quality checks, but fraudulent responders still get through. I advocate that clients understand three key things: the way the respondents get to the survey, the source of the respondents and the way those respondents behave in the survey itself. Each area requires scrutiny and is a large topic in itself. I do feel, however, that open-end analysis is one of the best final checks, and it is often an important indicator of overall survey quality.


9 years ago

The article raises important points, but remaining vigilant isn't enough. If you are just using manual processes to find the "jokers", you'll be spending a lot of time, and you'll probably miss some. Really you need automated solutions that can identify likely cheaters and weed them out as you are collecting data. That way you can filter them before you close quotas. Of course you can combine that with some manual efforts, but if you are relying solely on vigilance it's a losing battle. Feel free to contact me or my company (IntelliSurvey) for info on how we approach this problem.


9 years ago

Thanks for the article. It gives a true snapshot of what happens with respondents who volunteer for panels and are more concerned with getting incentives like points or cash than with accurately answering the questions. So-called "professional respondents" are a curse to our business.


9 years ago

Although Laura did point it out, I think we as survey authors need to take far more of the blame than we do. First, our surveys are BORING and LONG and POORLY WRITTEN. How can we possibly expect a responder to take us seriously when our questions are long and convoluted, when we don't provide every relevant answer, when we expect people to know things about themselves that we don't know about ourselves? Seriously, how many bars of soap did you buy last year? What did you have for dinner on Tuesday? Second, why must survey responders be these amazingly 100% fully attentive and 100% fully engaged people when we can't even write an email without checking our Facebook status and getting a snack at the same time? We are very often unreasonable about what is reasonable. We can't expect people to be perfect all the time, every time. Human beings get bored. Humans get tired. Humans have babies at their feet, dinner on the stove, emails to send, phone calls to answer. Even the best responders make errors now and then. I personally make errors all the time. Because I am human. Please. Let's get real about what is reasonable.


9 years ago

Amy – thanks for your feedback. Both examples were UK-based studies. I'm an online specialist, but we also cover CATI and F2F at McCallum Layton. These particular examples required an assessment of designs and product descriptions amongst a fairly low-incidence group of people, so we opted for an online approach.

Rob – we use a combination of automated and manual approaches to check data quality. Personally speaking, I find automated checks don't go far enough, and that's why we look at elements such as verbatims in more detail. It does take time, but I feel it's a very important process.

Anne – totally agree. I also believe that we, as an industry, do not spend long enough on the 'look and feel' design element of online surveys. Many agencies offer UX research services to clients, but don't seem to apply basic user experience principles to their own online surveys. If we made a concerted effort to improve the way our online surveys look, and also addressed the issues you've highlighted within questionnaire design, we might be in a better position.


9 years ago

Dear brave Laura, thank you for such an honest article. I'm far away from GB (I'm in Russia) and far away from 'normal' MR (consumer research), but even here in Russia I get lots of reactions to online samples from clients like 'come on, it is all fake' or 'hell no, it's only students and housewives there', etc. And all of this comes from real clients. After years of partying (the continuous hype about online, online, online), the morning has come. Samplers need to reconsider their approach, otherwise clients will become even more sceptical. Cheers, Roman
