OPINION
13 March 2015

Fit-for-purpose sampling in the digital age


According to the European Society for Opinion and Marketing Research (ESOMAR) Global Industry Study 2014, online research is now the principal mode of research in all the top ten research markets, with the exception of France (where it ties with automated interviewing) and China, which does not report a figure.


In the UK, online accounts for 29% of research spending, compared with 9%, 10% and 11% respectively for telephone, face-to-face and qualitative research. It has become the most commonly used mode of quantitative data collection.

But in spite of the huge volume of research being undertaken online, people are still arguing about the quality of the samples being used, whether the results can be trusted and how to develop an objective measure of reliability.

Annie Pettit, CRO of Peanut Labs, recently hosted a webinar, 'Using Margin of Error with Non-Probability Panels', which was attended by more than 600 people: a considerable turnout for a topic that was once relatively uncontroversial.
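The controversy is easy to state. The classical margin-of-error formula is derived under the assumption of a probability sample, where every member of the population has a known chance of selection; opt-in online panels do not satisfy that assumption, so quoting a margin of error for them is contested. As a point of reference only, here is a minimal sketch of the textbook calculation for a proportion from a simple random sample (the function name and the 95% z-value are illustrative choices, not anything from the webinar):

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a proportion p
    estimated from a simple random sample of size n.
    Valid only under probability sampling assumptions."""
    return z * math.sqrt(p * (1 - p) / n)

# Worst case (p = 0.5) for a sample of 1,000 respondents:
moe = margin_of_error(0.5, 1000)
print(f"+/- {moe * 100:.1f} percentage points")  # roughly +/- 3.1 points
```

For a non-probability panel the selection probabilities are unknown, so none of the terms in this formula are defensible without further modelling assumptions, which is precisely why the topic now draws an audience.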

So why the interest? There continues to be an unresolved and frequently unscientific discussion about the validity of online samples. They are undoubtedly much cheaper and quicker for most kinds of research. Online polling has proved to be pretty accurate in countries where internet penetration is fairly high, and yet there is still quite a lot of evidence that the results from online surveys can be misleading, or even wrong.

On day two of the Market Research Society’s Annual Conference, Impact 2015, The International Journal of Market Research (IJMR) will host a debate on fit-for-purpose sampling in the digital age. The discussion will be chaired by Adam Phillips, chair of the ESOMAR Professional Standards Committee. On the panel will be Reg Baker, co-chair of the AAPOR Task Force on non-probability sampling, and two of the world’s leading online research practitioners: Doug Rivers, professor of political science at Stanford and chief scientist at YouGov; and Corrine Moy, global director of Marketing Science at GfK. Our aim is to inject a dose of science and some practical advice into the ongoing public debate about online research.

The IJMR intends this session to provide an opportunity for a serious discussion about when, and how, to do good online surveys that can be trusted. Discussion topics will include:

  • How serious is the sampling quality problem for online research?
  • How can I judge whether findings from an online survey are reliable?
  • How can I commission online research which is ‘fit-for-purpose’ in meeting my needs?
  • Are we accepting a culture of limited transparency and “caveat emptor” which risks undermining trust in the whole research industry?
  • Is the pressure for low cost and quick results leading to a serious compromise in research standards?

Peter Mouncey, editor of the IJMR, has already shared his thoughts on these questions. If you would like to join the discussion, please come along on Wednesday after lunch.



8 years ago

One further issue to consider is how 'deep' the online pools we dive into to get our online samples really are. I work with a couple of audience measurement systems here in Australia which require broad geographic coverage but quite granular geographic reporting. We're finding that we quite quickly 'exhaust' the available pools of potential respondents. And that is just two surveys (albeit large ones). The potential (real) impacts on recruitment bias and completion rates are quite scary.


8 years ago

Perhaps in Australia we have a higher percentage of online panelists who are associated with several panels, therefore thinning and exhausting our sample universe?
