OPINION
3 July 2009

Taking the pulse of the drive for online research accuracy


As the online quality debate rumbles on, Knowledge Networks CEO Simon Kooyman argues for the creation of a central database to capture, on an ongoing basis, how many surveys a respondent takes and in what categories.

We have seen a number of organised efforts at assessing quality deliver preliminary results and advocate specific changes in how online panels are managed, assessed, and applied. The Online Research Quality Council (ORQC) organised by the ARF has helped bring together findings from 17 opt-in online panel sources alongside mail and telephone modes and reached some notable conclusions through an extensive analysis of results. In addition, the consortium known as OpenSample has enlisted online research firms, large and small, to help understand and track respondent behaviour.

Knowledge Networks is pleased to have actively contributed to both of these efforts, and we will remain keenly engaged. But we must observe that any sense of early satisfaction within the industry needs to be tempered by awareness that there is more to do.

For example, from the above and other industry work we know that about four out of ten online respondents belong to more than one panel, with an average membership of roughly four panels – ranging from two to 13. This group also has a high probability of taking up to 20 surveys per month, or four to five each week. While recent studies indicate that this may not be a problem in and of itself, it is a further testable hypothesis whether this group is also more motivated by incentives than the average respondent – and that can be a concern.

For many types of research, these respondents may indeed provide valuable directional advice. But when the decision at hand is essential to your business, with strategic initiatives and major investments at stake, one has to wonder if “ok” is good enough.

Therefore, we continue to strongly propose that a central database be built to capture, on a confidential and ongoing basis, how many surveys a respondent has taken and in what categories. This would create a first uniform standard for an important quality dimension (activity per category), lay the groundwork for the next dimension (duplication and overlap), and give clients and research firms alike the choice to select the appropriate quality level for each type of research project.
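To make the proposal concrete, a minimal sketch of what such a central record could look like follows. This is purely illustrative, not a design the article specifies: the schema, the category labels, and the idea of hashing panel-specific respondent IDs for confidentiality are all assumptions of this sketch.

```python
import hashlib
import sqlite3

# Hash the panel-specific respondent ID so the central database
# never stores a directly identifying value (assumed approach).
def anon_id(panel_respondent_id: str) -> str:
    return hashlib.sha256(panel_respondent_id.encode()).hexdigest()[:16]

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE survey_activity (
        respondent TEXT,   -- hashed ID, shared across contributing panels
        category   TEXT,   -- survey category, e.g. 'automotive'
        taken_at   TEXT,   -- date the survey was completed
        PRIMARY KEY (respondent, category, taken_at)
    )
""")

def record_survey(panel_respondent_id: str, category: str, taken_at: str) -> None:
    """A panel reports one completed survey to the central database."""
    conn.execute(
        "INSERT OR IGNORE INTO survey_activity VALUES (?, ?, ?)",
        (anon_id(panel_respondent_id), category, taken_at),
    )

def activity_per_category(panel_respondent_id: str) -> dict:
    """The quality dimension the article proposes: surveys taken per category."""
    rows = conn.execute(
        "SELECT category, COUNT(*) FROM survey_activity "
        "WHERE respondent = ? GROUP BY category",
        (anon_id(panel_respondent_id),),
    )
    return dict(rows.fetchall())

# Example: one respondent completes three surveys across two categories.
record_survey("panel-A:12345", "automotive", "2009-06-01")
record_survey("panel-A:12345", "automotive", "2009-06-15")
record_survey("panel-A:12345", "fmcg", "2009-06-20")
print(activity_per_category("panel-A:12345"))
```

Because every panel hashes the same underlying respondent identifier, the same table would also expose the second quality dimension the article mentions – duplication and overlap across panels – without any panel revealing its member list.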

Similar quality standardisation exists in many industries built on a significant commodity market. I urge research companies and clients alike to take this no-brainer first step as an indication of our seriousness about bridging the gap in quality expectations.


1 Comment

15 years ago

Good article. Thumbs up. I am sure Toluna and SSI will support this initiative, won't they?
