Saturday, 01 November 2014

This month we... became a fake respondent

Robert Bain goes undercover and signs up to a raft of online panels. He vows to fill in all the surveys. He regrets it.

Are online respondents who they say they are? Do they fill in surveys properly? Are the surveys any good? The best way to find out, we decided, was to sign up to a bunch of panels ourselves. For a month, I would become a fake respondent.

To start with I created a fake name and email address, and decided to give my occupation as IT manager to make sure I didn’t get rejected for being a journalist in the MR industry. Apart from that I would be as honest as the surveys allowed me to be. Obviously there is nothing remotely scientific about this report - it’s just an account of one person’s experience - so to prevent anyone leaping to unfounded conclusions we have chosen not to name the companies whose surveys we did. Suffice to say that if you work in research you know them all. We should also state that we did not redeem any of the incentives, even though I earned enough cash to buy a pint and a half down the local.

Day 1
I began my adventure by spending an afternoon going to the sites of all the panels I could think of and signing up. I tried for eleven but only got in to ten - the last one wanted US residents only. Once you’ve joined, the first thing you have to do is fill in lots of profiling questionnaires about your life and interests - like a cross between a Facebook profile and a tax return. For one company I had to do 15 of these, with grids of up to 90 tick boxes, while another made me answer 11 questions about a car I don’t own. It’s going to be a tough month.

Day 2
I completed four surveys today and got invites for many more. One of them made me promise that I’d “read each question thoroughly and respond to each question thoughtfully and honestly”. I felt slightly affronted that it was me who had to promise to behave, rather than the survey, but it didn’t matter as I soon got kicked out anyway. After lunch I started on another, which reached 50% on the completion bar before kicking me out on the grounds that it was “full”. Do I spot a pattern?

Day 3
The day begins with the oddest question yet: “Please select hamster from the list below: Cat, dog, bird, hamster, mouse.” No explanation is offered.

Day 4
Received an email this morning from one of the firms I signed up with on Day 1, asking me some rather probing screening questions. Am I the only person who uses the email account where the invites get sent? Why did I join the panel? How many online surveys do I do in a month? (I say 30, although with 27 days to go that could be an underestimate). We’ll see if they send me any more surveys now I’ve admitted being a panel junkie.

Day 9
I return from the weekend to find 18 invites in my inbox. Is that a lot? There’s a nice survey about an online drinks promotion, then another about YouTube that uses a slick interface to mimic the site. Then things take a turn for the worse: I share my feelings about what sort of car rental services I might use, for what purpose and in what areas, only to be told that the survey in question is full. The same thing happens in the next survey. When I finally find one that accepts me, it asks me to rate statements including “I look for a domestic appliance to be my ally” and “I want a funny and pleasant to use domestic appliance”. If these surveys were people, I wonder, what sort of people would they be?

Day 10
Three surveys kicked me out this morning after I had opened my heart to them about crisps and other personal matters. Have I been spotted as a fake? If so, why do they let me begin the questionnaires?

Day 11
A word of warning: not all the surveys billed as an interesting opportunity to express my point of view live up to the claim. In my book, comparing different combinations of different soft drinks at different prices does not count as an interesting opportunity to express my point of view.

Day 17
My stamina is failing and I’m checking my survey invites less often. Still, the one I clicked through to today was unusually pretty. Each row of the answer grid only appeared after I’d completed the row above, so I was always on the bottom row. A nice, effective design. Then it kicked me out.

Day 22
A revelation this morning: a survey that explained its screening procedure and warned me that it couldn’t guarantee I’d qualify. As a result I’m not nearly so annoyed when I get kicked out a few moments later. Later, a survey being run by a major MR agency breaks the record for the number of radio buttons: 1,302 in a 217×6 grid. Ouch.

Day 31
With just minutes to go before the month is done, I click through to one last survey and begin to fill it in. It turns out to be a huge forest of buttons, and after 20 minutes I give up without finishing. What sweet relief.

Out of the shadows
In 31 days my fake respondent persona received 150 email invitations, clicked through to 99 surveys, started 73, got kicked out of 39, completed 30, crashed out of three and gave up on one.

“Everything we hear about marketing in 2010 suggests that indifference to brands and brand messages needs to be understood, not underestimated. Most online surveys aren’t helping”

So what did I learn? As to whether I was spotted as an imposter, it’s unclear. I received and completed surveys from half of the ten firms I signed up with, but I was screened out of more than half of the surveys I began, usually after answering demographic questions, and often after questions about my behaviour and opinions too. Whether they realised I was a fake or a duplicate, or whether they were just rejecting me on my answers, I don’t know.

I was asked about my survey-taking habits four times. Two companies asked me when I signed up if I was a member of other panels. After I told them I was a member of quite a few, the first of them sent me no more invitations, but the second continued to invite me to surveys (including studies being run by the first).

Two surveys from another company screened me out after I told them how many studies I’d taken part in recently - but the same company had already let me do a number of surveys without checking.

Doing lots of surveys for a month isn’t really enough to judge companies on their panel management, but it is enough to judge the general quality of online surveys. The first thing that struck me is that their design is pretty poor, in both its usability and creativity. Some surveys are slick and pleasant to navigate, but most aren’t. In the worst cases a combination of clunky systems, sloppy presentation, haughty instructions and awkward layout can create a feeling of disrespect, even rudeness, towards the user.

The way surveys are managed is often clunky too. Links remained active on panel websites for surveys that were closed or for which, according to what the company already knew about me, I was not eligible.

The second thing I noticed was that surveys are not good at being upfront and honest with the user. The single most common outcome of starting a survey (and the single biggest annoyance) was that I would spend time working through questions only to be told I wasn’t eligible. Was I being rejected for being fake? Perhaps in the cases where I was blocked right at the start, but in many instances I was allowed to get well into the survey before being turfed out without explanation, and denied the incentive.

The third point is that surveys often provide little room for the truth, forcing respondents to feign opinions when the only sensible answer is Don’t know / Don’t care. Apart from being frustrating for the survey taker (I’d rather not be asked my opinion than be asked and not be able to answer) this will surely only leave researchers with data that looks complete but isn’t. Everything we hear about marketing in 2010 suggests that indifference to brands and brand messages needs to be understood and not underestimated. Most online surveys aren’t helping.

Doing several surveys a day for a month would make anyone sick of them, so my feelings may be exaggerated compared to a typical panel member. But if the MR industry’s aim is for respondents to be wanting rather than just willing to do surveys, my experience over the past month tells me it’s got work to do.


Readers' comments (19)

  • I got a kick out of your blog! This is a fight that a number of the top panels have been fighting for many years. How do we convince survey writers to write a quality survey that is so considerate and thoughtful of responders that no data quality checks are required? Data quality is not a responder issue, it's a researcher issue.

    I'll offer the same answer as always. Quality companies need to SAY NO to bad research. Refuse to launch surveys that are too long, too boring, too badly worded.

    Maybe, in time, responders will warm up to us again and we won't have to resort to bad practices (100 invites per month is not unusual!) to get the completes we need.

    I could write a dissertation on this topic but once is enough. :)

    Annie Pettit
    http://www.conversition.com

  • Regarding hamsters, I believe the question was asked to verify that you were an engaged respondent who was reading the questions and following instructions. But perhaps your father smelt of elderberries.

    You ask, "If these surveys were people, I wonder, what sort of people would they be?" If these surveys were people, and you saw them walking towards you on the street, you would duck into the chemist to avoid them.

  • I suspect you're right about the motivation for the hamster question - possibly also to weed out bots? Either way, no excuse for not explaining it.

  • Excellent piece Robert.

    This certainly seems more diligent, robust and believable than a lot of the online research reports that I come across!

  • I agree with your comments, and your experience is very similar to that of a lot of respondents, including my own (not just those that sign up to multiple panels). Essentially, as a researcher/panel owner, if you want the best data you need to use your stored information correctly, e.g. profiling data for targeting; be honest with your respondents about the time of interview and sometimes even the topic, especially in the case of long financial/pharmaceutical studies; and be clever with your screening questions by asking these up front, so panellists are not kicked out after spending 10 minutes answering questions.

    Which leads me on to incentives, and offering the right one for aspects such as the type of survey, the length of interview, whether it asks personal questions and whether your respondent is in a low-incidence group, for example. In this day and age there really is no excuse for dishonesty, bad panel management or bad survey design, as the tools are widely available.

    As for the respondent: these people have committed to you by joining your panel, and they could have done so for many reasons - giving opinions, helping solve problems or, in some cases, simply for the reward. Overall it's topic/title, time and reward that motivate respondents to answer. Whatever the reason, if you want decent data you need to provide a decent survey environment for them to participate in, especially as your survey is essentially an ambassador/advert/marketing message for your brand. As with those, if you provide an engaging environment the respondent is more likely to participate in further research/exercises - beneficial to everyone!

  • Great piece Robert - I'm sure a lot of people in research are members of panels to keep an eye on what is going on, and perhaps seeing that the competition isn't doing much different reinforces the notion that all is well and survey design does not need to improve. Hopefully the datasets aren't getting too full of researchers - what a problem that would be.

    One thing surprised me: you were getting filtered out from a number of surveys, and this would usually be because you don't qualify for the survey (I doubt very much that they programmed them to ascertain you were a fraud). I know some panels give you a smaller number of points for completing the screener to reward you for giving your time to the survey. They normally tell you this in advance so you don't feel cheated.

    It would be good to see this as standard practice if we value our dwindling pool of respondents.

  • Some of them did give me a smaller number of points, yes, and some of them made clear upfront that I'd only get the points if I got to the end of the survey. But not all of them. It wasn't generally clear at what point the screening ended and the survey itself began - they're happy to let you believe you've begun the 'survey' then tell you later 'This survey is full' or 'You don't qualify for this survey'. In some cases I reached 25% or 50% on the completion bar, and shared quite specific details about my views and behaviour before being turfed out without so much as a goodbye.

  • It is a very interesting article and heartening to see this topic get the focus it deserves. However, I feel a little short-changed, as I was expecting a bigger 'bang', if you like, in what you found and what you reported back.

    I'm not saying that the article should have named and shamed individual companies. Let's face it, we have all been guilty at one time or another of making that exception for a client who is willing to pay the right amount - and this is the crux of the issue, or at least a big part of it.

    The reality is that in these commercial times that we live in and with increased pressure to deliver profits, ensure healthy margins and survive in what can be a ruthlessly competitive industry, the conditions are ripe for companies to be scared to say "no".

    If the right quality checks and push-back were applied then I would guess that perhaps a third of all surveys going live to panels would be stopped. It never ceases to amaze me how long people believe they can ask a participant to sit in front of a screen taking a survey, often a badly designed survey at that. At Cint we 'cap' LOI and have a bee in our bonnet about it (it's one of several bees). It can frustrate the hell out of clients but we know it's the right thing to do.

    Ideally we would see more consultation between the end client, the MR agency designing the questionnaire and the supply vendor. It just doesn't happen enough and so we continue to see a gap in client vs. supplier expectations. I also agree with some of the earlier posts - it is all too easy to dismiss data quality as a responder/panel issue. At times it can be of course and this grates at those managing panels, I can tell you, because we all put in a significant amount of effort and investment in ensuring the appropriate quality controls are in place, from recruitment source through to survey completion.

    However, I think agencies need to step up and take more responsibility for ensuring the content of a survey is of the highest possible standard - something that sample suppliers have little or no control over (they should have some, at least on the cosmetics, LOI, screening questions, etc). If we continue to abuse participants' time and treat them like cannon fodder then we cannot expect anything other than bad data at the end of the process.

    It's not too late, and there is definitely more good than bad out there. Personally I think the industry has come on leaps and bounds regarding quality, and we and many others in this space work tirelessly day in, day out to minimise any potential effects on the model that may lead to participant fatigue, attrition and frustration.

    One great point the article does highlight is the over-solicitation of panel members. There is a lack of good data on what the right frequency of invitations should be, but it certainly shouldn't be multiple invites per day - even daily invites should be a concern. If profiling is up to date and well populated, and the incentive strikes the right balance (affinity vs value), then 'spamming' can be eradicated. Having your inbox filled up with survey invites within a matter of weeks is not going to motivate you to "give your opinion".

    A long rant I know but I feel a lot better for it.

  • Very funny stuff. I've been in the MR biz for 30 years and have seen it all, from my mall interviewer daze to where I am now, at a provider of online panels in Asia.

    We often sign up with our competitors to see who is hiring them and also to see how they send their invites out, ie, what do they put in the emails about the survey? (A wee bit of competitive intelligence).

    It never fails to amaze me what I see.....

    Back in the mall days, we field directors at MR companies used to talk about the unspoken secret of the industry, ie, how much cheating was going on in the malls (our job seemed to always be to figure out ways of minimizing the collection of that bad data...). Turned out everyone knew about it (end clients, the MR folks, the mall facilities), but no one wanted to confront it, no one wanted to open up a giant can of worms..... See no evil, hear no evil......

    Now we have the same situation with a lot of web-based data collection. There is a lot of crap going on, from piss-poor questionnaire design (a major problem), to insanely lengthy questionnaires, to outright cheating (particularly in Asia and eastern Europe).

    Much of this is fueled by the buyers wanting to pay peanuts for the panel sample. There is very little sense of "partnership" anymore between the buyer and seller. It's all about maximizing profit. How can clean data exist at a dollar or less per completed survey?

    The reputable online panel companies will usually turn down that work, but like maggots crawling out of the ground, new online panel companies seem to spring up overnight offering insanely low CPIs, which the big boys buy into in ever increasing numbers.

    I can't tell you how many times we bid on a project only to be told "your CPIs are five times higher than your competitors' - can you match them?"

    And these competitors are the ones we sign up with to spy on. We see their emails saying silly stuff like, "fun survey for single moms with kids 3-5 who drink yahoo twice a week". Gee, talk about giving away the qualifiers up front..... (we see that a lot), and yet we see MAJOR MR companies hiring these firms........ (like financial companies selling derivatives, not caring that the mortgages they are buying from the banks to bundle into these financial products are worthless......).

    So, cost seems to rule, not the quality of the data being collected. Greed rules, boys and girls.

    I suppose when enough end clients launch new products based on data collected on the web, and those products fail in the market place big time, then perhaps they will put pressure on the industry to clean up its act. After all, it was the end clients who fueled the expansion of online research by demanding online research be cheap, and quick and accurate. Perhaps the accurate part was lost somewhere along the way.......

    Hmmm. I could go on and on and on and on.....30 years in this industry......a lot of stuff floats by....BUT, don't get me wrong, there are a LOT of honorable market researchers out there, but they get marginalized by the greed of it all....and their voices are often silenced by the powers who have the most to lose.....

  • I completely agree with the comments and find the story an interesting read.

    Surely if the MRS was doing its job properly this would not be happening, as there are many clauses in the MRS Code of Conduct suggesting that overly long, difficult-to-complete or unclear surveys and ambiguous questions are against the code?

    Most research companies are members of this society, yet does the MRS ever check that members are actively abiding by the code to ensure the protection of respondents against unsuitable research?

  • This does seem to be an area where the MRS should be involved, but despite endless codes and documents, I find it hard to see what practical impact the MRS has in general. If there were a meaningful quality kitemark for suppliers, it would be easier for those of us working client-side to justify not going for ridiculously low bids.

  • The MRS is not a policing authority for the industry. The biggest issue is questionnaire design. I have worked at two of the biggest MR agencies and am currently working at one of the biggest fieldwork agencies, and I find it surprising that you would get a quota-full message after completing 50% of the survey. Avoiding that is a golden rule at most agencies (more than 10 screening questions is a no-no).

    Questionnaire design is the biggest issue. The market research industry is famous for its "crap" pay in junior exec jobs, and by putting junior staff on designing a questionnaire, you will of course get what you pay for. I think I am opening a can of worms here, but we can all scream and shout about this issue, which has been a constant concern from day 1 of online research; unfortunately, nothing will ever change unless we change as an industry.. Which will not happen, so I will step back now and continue reading re-used articles and ideas.
