OPINION | 1 March 2010

Facing up to difficult questions about panels


At CASRO’s panel conference in New Orleans, researchers shared their efforts to answer some of the tough questions about how panels should and shouldn’t be used to get consistent, reliable results.

Jeff Miller, president of Burke and a conference co-chair, opened the conference by saying: “A crisis of confidence has befallen our industry. We are not scientific enough, according to some academics, with not enough reliability… How can we get better at our craft?”

Challenges to consistency were discussed throughout the conference and included issues with sourcing, panel management and survey design.

While research has shown that different sample sources can lead to dramatic differences in demographic benchmarks, Jamie Baker-Prewitt of Burke investigated whether those sources also produce materially different patterns of consumer behaviour. For the items that market researchers typically investigate – category purchasing, aided and unaided awareness, and brand purchase – differences by sample source were relatively small across panels and river samples, although differences were greater for samples drawn from a social-networking aggregator and directly from Facebook.

The news was less positive from Paul Johnson and Bob Fawson of Western Wats, who discussed a detailed case study of four surveys drawing sample from three sources. They identified a source effect for 29.7% of the questions across the surveys. Survey routers, which allocate a single stream of respondents across multiple surveys, can dramatically change the pool of respondents available to lower-priority studies, affecting consistency. Johnson and Fawson found a router effect for 13.5% of the questions they fielded, and OTX Research is developing a measure of router bias to quantify the impact that a router can have on the results of a panel study.
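For readers who want to probe their own data, one plausible way to estimate a source effect of this kind is to test, question by question, whether answer distributions differ by sample source and report the share of questions flagged. The sketch below illustrates that general idea only; the column names, data layout and chi-square criterion are assumptions, not the method Johnson and Fawson or OTX actually used.

```python
# Illustrative sketch only: estimate the share of survey questions whose answer
# distributions differ significantly by sample source. The data layout and the
# chi-square criterion are assumptions, not the authors' actual method.
import pandas as pd
from scipy.stats import chi2_contingency

def source_effect_rate(responses: pd.DataFrame, question_cols,
                       source_col: str = "source", alpha: float = 0.05) -> float:
    """Return the fraction of questions showing a significant source effect."""
    flagged = 0
    for question in question_cols:
        # Cross-tabulate answers to this question against the sample source.
        table = pd.crosstab(responses[question], responses[source_col])
        _, p_value, _, _ = chi2_contingency(table)
        if p_value < alpha:
            flagged += 1
    return flagged / len(question_cols)

# Hypothetical usage with one row per respondent:
# df = pd.read_csv("survey_responses.csv")
# rate = source_effect_rate(df, [c for c in df.columns if c.startswith("q")])
# print(f"Source effect detected for {rate:.1%} of questions")
```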

Another factor that has long been known to affect consistency is survey design, but what hasn’t been studied is how this affects panel composition. Adam Portner of e-Rewards/Research Now presented research showing that “a panel member completing a ‘bad’ survey is twice as likely to stop being an active member of the panel as one that completes a ‘good’ survey”.

So what makes a bad survey? Nallan Suresh and Michael Conklin of MarketTools presented the results of a predictive model that showed how 20 design variables (e.g. length of survey, word count, percentage of matrix questions) drove 60% of respondent engagement. “Design does, in fact, influence respondent experience and behaviour, i.e. engagement, in a consistent way, and implies that survey designers do have some degree of control in maximising positive engagement and avoiding or minimising adverse engagement.”
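As a rough illustration of the kind of analysis behind such a finding, one could regress an engagement score on per-survey design variables and look at the share of variation explained. The sketch below assumes a per-survey dataset with hypothetical column names; it is not the MarketTools model.

```python
# Minimal sketch, assuming a per-survey table of design variables and an
# engagement score; an illustrative regression, not the MarketTools model.
import pandas as pd
from sklearn.linear_model import LinearRegression

DESIGN_FEATURES = ["survey_length_min", "word_count", "pct_matrix_questions"]

def variance_explained(surveys: pd.DataFrame, target: str = "engagement_score") -> float:
    """Fit a linear model of engagement on design variables and return R^2."""
    model = LinearRegression().fit(surveys[DESIGN_FEATURES], surveys[target])
    return model.score(surveys[DESIGN_FEATURES], surveys[target])

# Hypothetical usage:
# surveys = pd.read_csv("survey_design_metrics.csv")   # one row per survey
# print(f"Design variables explain {variance_explained(surveys):.0%} of engagement")
```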

Inna Burdein of the NPD Group showed how changing survey introductions by tone (incentive tone, friendly research tone, formal research tone) and by the presence or absence of a face could change response rates. The interactions were complex, though, and offered no easy conclusions.

The conference showed that industry participants are developing an exceptional amount of research on research in this area. While consistency remains difficult to achieve for access panel research, there is certainly consistency across providers in the drive to improve quality.


1 Comment


These questions are delightfully interesting from a research-on-research point of view and we have pondered them for many years. However, I know 100% for sure that they will never be answered to our satisfaction. It seems like we try and try and try to answer them but end up with little progress being made. I'm all for sacrificing a bit of academic perfection for the sake of getting the job done - which is what we have been doing. We can keep having fun trying to answer the questions, but let's try moving ahead three steps, instead of two, for every step back we take.
