Fit-for-purpose sampling in the digital age

According to the European Society for Opinion and Marketing Research (ESOMAR) Global Industry Study 2014, online research is now the principal mode of research in all the top ten research markets, with the exception of France (where it ties with automated interviewing) and China, which does not report a figure.


In the UK, online accounts for 29% of research spending, compared with 9%, 10% and 11% respectively for telephone, face-to-face and qualitative research. It has become the most commonly used mode of quantitative data collection.

But in spite of the huge volume of research being undertaken online, people are still arguing about the quality of the samples being used, whether the results can be trusted and how to develop an objective measure of reliability.

The CRO of Peanut Labs, Annie Pettit, recently hosted a webinar on 'Using Margin of Error with Non-probability Panels', which was attended by more than 600 people: a considerable turnout for a topic that was once relatively uncontroversial.
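For context, the conventional margin of error quoted alongside survey results comes from the normal approximation for a proportion observed in a simple random sample, and the crux of the webinar's topic is that this formula assumes probability sampling, an assumption non-probability online panels do not satisfy. A minimal sketch of the standard calculation (illustrative only; the figures below are not from any survey mentioned in this article):

```python
import math

def margin_of_error(p, n, z=1.96):
    """Half-width of an approximate 95% confidence interval for a
    proportion p observed in a simple random sample of size n
    (normal approximation). Note: this assumes probability sampling,
    which non-probability online panels generally are not."""
    return z * math.sqrt(p * (1 - p) / n)

# Worst case (p = 0.5) for a hypothetical sample of 1,000 respondents:
moe = margin_of_error(0.5, 1000)
print(round(moe * 100, 1))  # ≈ 3.1 percentage points
```

Quoting a figure like "±3.1%" for an online panel sample borrows the credibility of this formula without meeting its assumptions, which is precisely why the topic has become contentious.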

So why the interest? There continues to be an unresolved and frequently unscientific discussion about the validity of online samples. They are undoubtedly much cheaper and quicker for most kinds of research. Online polling has proved to be pretty accurate in countries where internet penetration is fairly high, and yet there is still quite a lot of evidence that the results from online surveys can be misleading, or even wrong.

On day two of the Market Research Society’s Annual Conference, Impact 2015, The International Journal of Market Research (IJMR) will host a debate on fit-for-purpose sampling in the digital age. The discussion will be chaired by Adam Phillips, chair of the ESOMAR Professional Standards Committee. On the panel will be Reg Baker, co-chair of the AAPOR Task Force on non-probability sampling, and two of the world’s leading online research practitioners: Doug Rivers, professor of political science at Stanford and chief scientist at YouGov; and Corrine Moy, global director of Marketing Science at GfK. Our aim is to inject a dose of science and some practical advice into the ongoing public debate about online research.

The IJMR intends this session to provide an opportunity for a serious discussion about when, and how, to do good online surveys that can be trusted. Discussion topics will include:

  • How serious is the sampling quality problem for online research?
  • How can I judge whether findings from an online survey are reliable?
  • How can I commission online research which is ‘fit-for-purpose’ in meeting my needs?
  • Are we accepting a culture of limited transparency and “caveat emptor” which risks undermining trust in the whole research industry?
  • Is the pressure for low cost and quick results leading to a serious compromise in research standards?

Peter Mouncey, editor of the IJMR, has already shared his thoughts on these questions. If you would like to join the discussion, please come along on Wednesday after lunch.

Research Live is published by MRS.


2 Comments

John Grono

One further issue to consider is how 'deep' are the online pools we dive into to get our online samples? I work with a couple of audience measurement systems here in Australia which require broad geographic coverage but quite granular geographic reporting. We're finding that we quite quickly 'exhaust' the available pools of potential respondents. And that is just two surveys (albeit large ones). The potential (real) impacts on recruitment bias and completion rates are quite scary.


Derek Nash

Perhaps in Australia we have a higher percentage of online panelists who are associated with several panels, therefore thinning and exhausting our sample universe?

