FEATURE, August 2009

The standard bearers

It feels like the industry has been discussing the issue of quality in online research forever. Robert Bain asks where the debate has got us, and where it should lead us next.

It will come as a relief to many in the industry that discussions about cleaning up data quality in online research seem to be coming to some kind of resolution. Sample quality has been a thorny topic since the birth of online, but the last couple of years have seen a flurry of action as well as words. In one 12-month period, software for weeding out duplicate and fake respondents was introduced by companies including Mo’web, Peanut Labs, MarketTools, Western Wats, Greenfield, Globalpark and Mktg.

The targets of these tools are respondents who appear more than once in a panel or sample, and respondents who misrepresent who they are. According to MarketTools, maker of the TrueSample solution, there’s a strong correlation between people who lie about who they are and people who lie in the survey itself. The firm claims that failing to remove bad respondents from your panel can make you up to six times more likely to make the wrong business decision.
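
To make the mechanics concrete: at its simplest, de-duplication of this kind boils down to normalising identifying fields and flagging repeats. The sketch below is purely illustrative (the schema and function names are invented for this article, and none of the named products necessarily work this way):

```python
from collections import defaultdict

def find_duplicates(respondents):
    """Flag panellists whose normalised email appears more than once.

    `respondents` is a list of dicts with 'id' and 'email' keys
    (a hypothetical schema used only for this sketch).
    """
    seen = defaultdict(list)
    for person in respondents:
        # Normalise so trivial variants ("JANE@x.com " vs "jane@x.com") match.
        key = person["email"].strip().lower()
        seen[key].append(person["id"])
    # Any key claimed by more than one panellist ID is a candidate duplicate.
    return {email: ids for email, ids in seen.items() if len(ids) > 1}

panel = [
    {"id": 1, "email": "jane@example.com"},
    {"id": 2, "email": "JANE@example.com "},
    {"id": 3, "email": "sam@example.com"},
]
print(find_duplicates(panel))  # {'jane@example.com': [1, 2]}
```

Real tools layer far more on top of this, such as digital fingerprinting and third-party identity checks, but the principle of matching respondents on normalised identifiers is the same.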

But others are warning that things aren’t that simple. In an article on Research-live.com, Patrick Comer of online researcher OTX argues that such technology-based efforts to address the issue of data quality are little more than a sticking plaster covering up a much more difficult set of problems.

One person who has observed these issues from every angle is Kim Dedeker – from the clientside as head of consumer research at P&G, from the agencyside in her new role as chair of Kantar Americas, and from the association point of view as a participant in the Advertising Research Foundation’s industry initiatives.

She told Research that the industry needs to move faster towards collaborative solutions. “One of the big conversations among clients is what can we do to accelerate progress,” she said. “I think we really are getting to the point where we can look at the big picture and assess what exactly are the problems with online, how do we set priorities in solving those problems, and then hopefully we’ll come together and align on some industry solutions… We really do need industry solutions, not individual supplier solutions, because… big companies like P&G don’t have capacity in any one or two suppliers, they actually need to use a whole portfolio of suppliers, so you want that consistency of approach.”

“We’ve got a bit of defensiveness among all parties in terms of protecting the world as we have known it before”

Kim Dedeker

Perhaps one of the reasons that the online quality question hasn’t been dealt with as fast as it should have been is that the debate hasn’t always been particularly healthy, with too much of what Dedeker describes as “rock throwing”. She said: “I honestly believe that we’ve been missing one another with the conversation in large part over the past two years. The clients are focused on one part of the equation, the suppliers are focused on another, the industry bodies have been really trying to play a key contributing role… but we’ve got a bit of defensiveness among all parties in terms of protecting the world as we have known it before – our approach to research, our approach to the project within the clientside etc.”

Joel Rubinson, chief research officer at the ARF, told Research the debate has often been characterised by uncertainty and confusion. He describes the talk of professional respondents as being “like a big echo chamber”, with the idea taking root despite a lack of evidence, simply because everyone was talking about it. As Rubinson points out, the term ‘professional’ is usually positive, not pejorative.

Busting myths
Determining the real impact of professional respondents was one of the aims of a major research-on-research initiative launched by the ARF last year. Results revealed in June went against some of the received wisdom about online quality. Rubinson said: “The idea behind this concept of professional respondents is that online research in the US was really coming from a very small number of people, who were in it for the money and the rewards and had figured out how to game the system, and that this small group was on everyone’s panel. It turned out not at all to be the case.”

The ARF’s study, which involved 17 panel providers, found that only 16% of email addresses were present on more than one panel, and the people doing the most surveys tended to provide some of the most considered and reliable answers. Others have come up with different figures, differently sliced, which paint a scarier picture of the duplication problem, but Rubinson plays it down. He described the profile that emerged from the study: “Mostly people are on one panel. Some take surveys pretty frequently, and those who do are generally doing it with good intentions. So if the word professional refers to those people, it’s actually a very good thing. So hopefully the industry can find that not too disorienting, because usually that’s what the word professional does mean.”
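
The 16% figure is, at heart, a set-overlap calculation: of all distinct email addresses across the participating panels, what share turns up on more than one? A minimal sketch of that arithmetic, using invented data rather than the ARF’s:

```python
from collections import Counter

def multi_panel_share(panels):
    """Share of distinct email addresses found on more than one panel.

    `panels` is a list of sets of email addresses, one set per panel.
    """
    counts = Counter()
    for panel in panels:
        counts.update(panel)  # each address counted once per panel it is on
    duplicated = sum(1 for n in counts.values() if n > 1)
    return duplicated / len(counts)

panels = [
    {"a@x.com", "b@x.com", "c@x.com"},
    {"b@x.com", "d@x.com"},
    {"e@x.com", "f@x.com"},
]
print(f"{multi_panel_share(panels):.0%}")  # 17%: 1 of 6 addresses duplicated
```

How the slicing is done matters, which is why others arrive at scarier figures: counting survey completes rather than distinct addresses, for instance, gives frequent responders far more weight.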

“Buyers need to start having conversations with suppliers about the sample sources that they use”

Joel Rubinson

But despite the attention that the problem of duplication has received, Rubinson said it wasn’t even the biggest issue the ARF’s researchers identified. “For me the number one thing that buyers and suppliers should be talking about right now is that panels are not interchangeable,” he said. “Buyers need to start having conversations with suppliers about the sample sources that they use, which is not a conversation that they’re having. The operations people within suppliers need to start managing how they source sample for a given study not just based upon sample availability or productivity, but also based upon data consistency. Some suppliers might have those rules in place. I’m sure others do not.”

Data collection firm Mktg has just released results from an 18-month study into panel data consistency, and has come up with a product to measure and track it, based on average responses to a standard questionnaire collected over time. Mktg’s president Steve Gittelman says what’s important is not what people’s responses were, but “the degree of variability of those responses from the norm”.
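
Gittelman’s description points to a simple consistency check: compare a panel’s latest average answers on the benchmark questionnaire against its historical norm and score the deviation. The sketch below illustrates the general idea only; it is not Mktg’s product, and the z-score approach is an assumption:

```python
import statistics

def consistency_score(history, latest):
    """Average absolute z-score of the latest wave against historical norms.

    `history` maps each benchmark question to a list of past wave means;
    `latest` maps the same questions to the newest wave's mean. Lower
    scores mean the panel is answering more consistently over time.
    """
    deviations = []
    for question, past_means in history.items():
        norm = statistics.mean(past_means)
        spread = statistics.stdev(past_means)
        deviations.append(abs(latest[question] - norm) / spread)
    return statistics.mean(deviations)

history = {"q1": [3.1, 3.0, 3.2, 3.1], "q2": [4.0, 4.1, 3.9, 4.0]}
latest = {"q1": 3.15, "q2": 4.6}
print(round(consistency_score(history, latest), 2))  # q2's drift dominates
```

The point of such a metric is exactly the one Rubinson raises: it gives buyers and suppliers a shared number for discussing whether two sample sources are interchangeable.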

Rubinson says there is “light at the end of the online data quality tunnel”, and the ARF is now working toward the industry-wide solutions that the likes of Kim Dedeker have called for. Aware that people’s patience is running out, the foundation has set itself a 90-day deadline to publish recommended practices, giving it until mid-to-late September.

The bigger picture
Dedeker hopes that resolving the question of sample quality will allow the industry to focus on bigger questions such as survey design. “Perhaps the biggest driver of poor quality is length of questionnaire,” she said. “Some of the clients that we work with have questionnaires that are taking two hours, and perhaps more, for the respondent to answer. It’s a bit of a garbage in, garbage out scenario – we can clean up the science but if we don’t clean up the design… then we haven’t really won any battles.”

Amy Millard, vice president at MarketTools, said that while clients are glad to have the technology to deal with duplication, they’re ready to move on to new challenges. “When you talked to folks at conferences three years ago they would have thought that the problem [of sample quality] would have been solved by now,” she said. “So what I hear when I talk to clients is just a lot of frustration that we’re still talking about it. In particular these senior researchers are developing new ways of doing market research – Twitter has come on to the scene, there’s the rise of self-forming communities, there’s a lot of really exciting stuff happening online, and their frustration is to still be talking about sample quality.”

Online research is certainly maturing, and more and more companies are exploiting its potential, not just for faster, cheaper surveys, but for innovation and interactivity. London-based Verve Partners, launched last year, specialises in creating branded online communities for its clients. Founding partner Andrew Cooper, previously of panel firm Research Now, echoes the views of OTX’s Patrick Comer, who called bad respondents “a Promethean myth”.

Survey quality
Cooper told Research: “For me a big issue around respondent quality isn’t around the quality of respondents, it’s around the quality of the surveys you put in front of them. A lot of researchers think that straightlining or lack of people doing the survey properly is a panel issue or a people issue. It’s not. It’s a communications issue.”
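
Whatever its cause, straightlining (giving the same scale point all the way down a grid of questions) is at least straightforward to detect. A minimal sketch, assuming one respondent’s grid answers arrive as a list of scale values:

```python
def is_straightliner(grid_answers, min_length=5):
    """True if every answer in a grid of rating questions is identical.

    `grid_answers` is one respondent's list of scale values for a single
    grid; short grids are ignored because uniform answers there can be
    perfectly honest.
    """
    return len(grid_answers) >= min_length and len(set(grid_answers)) == 1

print(is_straightliner([3, 3, 3, 3, 3, 3]))  # True: flag for review
print(is_straightliner([3, 4, 3, 2, 5, 3]))  # False
```

Cooper’s point is that the flag tells you something was wrong with the exchange, not necessarily with the person: a tedious 40-row grid invites straightlining from perfectly genuine respondents.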

Meanwhile another factor that’s changing how everyone looks at quality in research is the onset of the recession, with its simultaneous demand for lower cost and greater certainty pulling both ways on research providers. Cooper believes that, paradoxically, the severity of this recession might prevent quality losing out to cost because clientside research departments are faced with such stark choices. He said: “I think it’s a good time for researchers to have proper debates with their clients internally – do you want to do it well or not at all? And if you want to do it well, find more budget, or if you don’t believe it, let’s not do it and let’s not waste anyone’s time.”

As the industry starts to take a broader view of quality, it looks as if cleaning up the sample has been the easy part. Things are bound to get tougher as the questions get more complex but, if online can move from constantly being on the defensive to showing what it’s really capable of, things are also going to get a lot more interesting.
