FEATURE
1 May 2010

Online unplugged

As online research comes of age, have we faced up to questions about representivity? We brought together four industry figures to thrash out the issues.


Adrian Sanger is VP of research at Nielsen and a member of the Market Research Standards Board.


Tim Britton is UK chief executive of online research and polling firm YouGov.


Jeffrey Henning is founder of US survey software maker Vovici and a prolific research blogger.


Terry Sweeney is operations VP for Europe at panel provider Research Now. He was previously with e-Rewards, which bought Research Now last year.


Research: Terry, do you recognise the problems relating to sampling and representivity in online panel-based research and how is Research Now tackling them?

Terry: We definitely recognise them because when we’re looking at an online population, there’s a bias from the recruitment methodology all the way down to technology issues, bandwidth issues, things like that. We try to educate clients on the biases that could exist within the sample population. People are looking at insight, and a lot of times representivity online is not achievable, but you can get a good predictive sample that will represent some of the leading thoughts within a particular sector. So we’re doing a lot of target sampling and making sure that we’re addressing some of the things that clients are coming up against.

“Every single methodology has flaws. Where online is particularly strong is that, because there’s this focus, we have to be so tight on what we do”

Tim Britton

Tim: Can I butt in there? I think we’re in really big danger of straight away going down a road of ‘online sampling has certain drawbacks therefore you can only do this with it’. Don’t get me wrong – I’m the first to say online sampling has loads of drawbacks. But you know what? Telephone sampling has loads of drawbacks, face-to-face sampling has loads of drawbacks. The fact is that every single methodology has flaws. I think where online is particularly strong is that, because there’s this focus, we have to be so tight on what we do, it often makes us as accurate as, if not more accurate than, some of the others. There’s sometimes the suggestion that the others are OK and online’s not. That just isn’t the case.

Jeffrey: We just fielded a random-digit dial study, and when you looked at it at the end there was nobody under the age of 24 in it. If you want probability sampling in its purest form on the internet then you have to do something like address-based sampling, where you’re identifying people who do not have computers and giving them computers, which obviously is a very expensive methodology. These are not problems unique to a particular mode; they’re broader issues that affect all modes of data collection.

Tim: The other debate we often have is that the statistical sampling theory is based on random sampling of infinite populations, very roughly. When we are researching commercial issues we are never looking at an infinite universe and we’re seldom looking at the total universe. But if you apply rigour to identifying clearly what the universe is, then at least you understand the biases that you’re introducing – because you always introduce a bias whenever you do research. Then you end up with as much certainty as possible that the results you have are projectable. And that applies to us doing research howsoever we’re doing it.
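To put Tim’s ‘very roughly’ in symbols: the classical result he is alluding to is the textbook margin of error for a simple random sample. The formula below is standard statistics, offered purely as an illustration rather than a description of any panellist’s own method:

$$
\mathrm{ME} \;=\; z\sqrt{\frac{p(1-p)}{n}} \times \sqrt{\frac{N-n}{N-1}} \;\approx\; z\sqrt{\frac{p(1-p)}{n}} \quad \text{when } N \gg n,
$$

where n is the sample size, N the size of the universe, p the observed proportion and z the confidence multiplier. The finite-population correction vanishes once the universe dwarfs the sample, which is what ‘infinite populations’ buys you; crucially, the formula also assumes that every member of the clearly identified universe had a known chance of selection, which is exactly the assumption that the biases Tim describes can break.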

Research: What do the MRS standards say about online sampling and quality?

Adrian: The internet guidelines have been in need of an update. They were last written in 2006. Over the course of the next six months we’re going through another consultation phase where we’ll re-write the internet guidance on the back of the revised code of conduct.

Research: That’s a four-year gap since they were last written. Is that enough to keep up?

“Clients should ask more questions, and if providers are not asked the questions, we should answer them anyway”

Adrian Sanger

Adrian: It’s a wide gap. The revision is badly needed. That’s why it’s a priority in 2010. We’re not going to rush that because of the complexity of the subject. There are a number of touchpoints and that means we need to go through an extensive period of consultation. To your question of what this all means from a standards point of view: it’s about the right tool for the right job. We believe that online, whether that’s quota sampling or representative sampling, can do that job. The standards issue, actually, is about the practice of research, and whether in an era of DIY research we still have enough attention being given to asking the right questions upfront about who you’re talking to, creating the right survey instruments and all of those sorts of things. Online has made it possible for literally anybody to create a survey about anything. So there’s a need for us to ensure that we don’t lose sight of the values that we’ve fought so hard to maintain.

Terry: What we’ve seen is the deterioration of a valid instrument that’s been tested or normed, or has any type of really good survey design worked into it. I think some people put that into the camp of sampling, because people come to the sample vendors with those very poorly written or one-hour surveys that are extremely boring, they’re rating table after rating table, and that erodes the sampling part as well. Keeping those standards intact so that people actually understand how to build their research methodology impacts a lot on what you get on the other end.

“We can’t assume that there’s just one aspect of survey quality. You have to look at everything”

Jeffrey Henning

Jeffrey: We can’t assume that there’s just one aspect of survey quality. People calculated a margin of error on access panels when they shouldn’t have, because it was easy to calculate, and kind of forgot that it didn’t apply, and so people have overly concentrated on that. To improve the quality of research, whether it’s offline or online, you have to look at everything: coverage, non-response bias, leading questions, ‘satisficing’.
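Jeffrey’s ‘easy to calculate’ point is easy to demonstrate. The snippet below is a hypothetical illustration (not code from any firm represented here) of how readily the naive figure drops out, and of why it says nothing about coverage, non-response or the fact that an opt-in panel is not a random sample:

```python
import math

def naive_margin_of_error(n, p=0.5, z=1.96):
    """Textbook 95% margin of error for a simple random sample.

    For an opt-in access panel the formula still returns a number,
    but that number does not describe the survey's real error.
    """
    return z * math.sqrt(p * (1 - p) / n)

# A 1,000-person sample yields the familiar "plus or minus 3 points"...
print(round(100 * naive_margin_of_error(1000), 1))  # 3.1
# ...whether or not the respondents were randomly selected.
```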

Tim: That’s a very valid point. What happened is online made research faster, but the bit that got faster was the data collection. The thinking time at the beginning and the end, in theory, shouldn’t have changed at all, but everything somehow got crushed down at the same time. I suspect that the biggest driver of differences in survey results, when they’re put down to mode, is often what I would call a design effect. I’ve sat in a room with a client and they’ve said, ‘That data’s different to that data, that’s because that was online and that was offline.’ Nonsense – it’s because that’s a rubbish question and that’s a good question.

Adrian: You’re right, the expectation of crushing those timings meant that everything got condensed, and that wasn’t smart. The net result of all that is a weaker survey industry.

Research: Tim, how often do you find yourself defending YouGov’s online methodology to sceptics?

Tim: Quite a lot, it’s true. We’re often surprised – we have to remind ourselves: ‘Oh gosh, people still don’t get that.’ When we talk about a ‘projectable result’, there are projectable results out there called general elections and time after time we’ve shown that what we do can be projected to the population that we were surveying. It doesn’t get any clearer than that, and that sometimes has made us a bit lazy or forgetful that there’s still a job to be done in reminding people. In pretty much every proposal that we put out to clients we will have an overt discussion on representivity.

Adrian: Although I suspect that that’s uncommon sense these days. In the field of commercial research, more and more, the scientific side of it is largely ignored. It is online. Why is it online? No discussion.

Jeffrey: You’re quite right. There’s no discussion of where the sample came from, how it was derived. All of that is just left. So if you look at it and try to think ‘How did they actually do this?’ you have no idea. It’s not discussed at all.

Adrian: Clients should ask more questions, and if we’re not asked the questions we should answer them anyway.

Research: So what should we be focusing on when thinking about how to do research better? Terry, what do your clients look for?

Terry: We deal with such a wide range of clients that there’s a lot of variability. Sometimes they’ll ask how we can craft or design the questionnaire to make it a little bit more reliable, in terms of putting in trap questions or other mechanisms to make sure people are who they say they are, things of that nature. But clients are also asking us to push to the next level: what are we innovating with? Where does the online panel go tomorrow? I think a lot of the ‘online methodology doesn’t work’ criticism gets dumped on the sample vendors – ‘it’s bad-quality sample, it’s just bad data’ – when, if they actually took the surveys they’d created, a lot of our clients would probably flatline out or get kicked out themselves. It’s garbage in, garbage out.
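The trap questions and ‘flatlining’ Terry mentions are routine data-quality checks. The sketch below is a minimal, hypothetical illustration of two of them (the function names, thresholds and data layout are assumptions made for the example, not Research Now’s actual system): flagging respondents who give an identical rating to every row of a grid, and flagging those who miss an instructed-response item such as ‘select Strongly agree for this row’.

```python
def is_straightliner(grid_ratings):
    """True if every rating in a multi-row grid is identical (a 'flatliner')."""
    return len(grid_ratings) > 1 and len(set(grid_ratings)) == 1

def fails_trap(answers, trap_item="q12", expected="Strongly agree"):
    """True if the respondent missed an instructed-response (trap) question."""
    return answers.get(trap_item) != expected

# Hypothetical respondent record
respondent = {
    "brand_grid": [4, 4, 4, 4, 4, 4, 4, 4],  # identical rating on every row
    "answers": {"q12": "Neither"},            # missed the instructed response
}

flag_for_review = (
    is_straightliner(respondent["brand_grid"])
    or fails_trap(respondent["answers"])
)
print("Review before including in the dataset:", flag_for_review)
```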

Jeffrey: Research Now had a presentation at the CASRO Panel Conference about how panellists are more likely to quit the panel as a result of taking a bad survey. So do you need to have surcharges on bad surveys? It’s a tough situation because your customer’s coming to you with this questionnaire and they just want you to get the answers, but it’s a lousy questionnaire and it’s not going to meet their needs. And it’s not going to meet the long-term needs of your panel. That’s an awkward position to be put in.

Terry: It is, and in some cases we will actually say, ‘Sorry, we can’t run this. Here are the things that will give you the guidance to make it something that will run, but we have certain guidelines.’ There are certain clients that come in and say, ‘We want you to run this,’ and it’s like: I’m sorry, there are 15 different questions packed into one giant ratings table and I can’t get through it as a researcher. It’s tough.

Research: Jeffrey told us recently that the industry should be describing surveys done in access panels as qualitative rather than quantitative because they are ‘representative of their respondents only’. Terry, do you agree that panel-based research is qual, not quant?

“The quality of online research doesn’t come down to just the online panels, it’s a much broader picture, and you get out what you put in a lot of times”

Terry Sweeney

Terry: It depends how you’re defining qual and quant. If you’re defining it in a statistical sense, where you need to be able to run the statistics and you need this measure of error, then absolutely I would agree with the statement, because it is more about insight generation in those types of issues, where you’re looking for ‘significant’ findings not ‘statistically significant’ findings. When I was on the end-client side, when we did the research there was a lot more interest in ‘Wow, we didn’t realise our customers were thinking that’, rather than saying ‘That one’s statistically significant… that one’s statistically significant’ – because it didn’t really mean anything.

Adrian: The point is it’s still quantitative data, but it’s representative of something that we’re not exactly clear about.

Tim: Part of the whole misunderstanding about this is that some of the statistical tools that are used are simply wrong when applied to panels. Actually they’re simply wrong on a whole bunch of different levels. I think the whole statistical significance thing just gets in the way a lot of the time. It’s about whether the story the research is revealing is an interesting, useful and correct one. And as long as we know we’ve asked the right group of people, used a good questionnaire, we’re happy with the audience that that has come from, then the story – and I use that word both quantitatively and qualitatively, because if research can’t tell a story it’s just nonsense in the first place – the story that emerges is a helpful one.

Research: ‘Story’ is a bit of a woolly word, isn’t it?

Tim: No, not at all. When a piece of research is commissioned, clients don’t really want to know whether it’s 89% of people who prefer one particular type of toothpaste versus 82% who prefer another; it’s about why this is happening. Why are they behaving this way? What is it that people are thinking? The whole point is about understanding what that data is saying to you. I use the word story to describe what the meaning of the research is. It’s a narrative, if you want to use a more formal word. We’re in the communications business – if people can’t understand what we’re doing, if it doesn’t have an impact on the decisions they’re making, then there’s no point in doing it in the first place.

Adrian: If you like, insight is the combination of the number and the narrative, and that, in my mind, is the story. If some of our solutions are less rigorous and more qualitative but they enable us to generate insight, job done.

Research: In that case the issue is about how you present and sell your methods. How well would you say clients understand what they’re buying when they buy online research?

Adrian: Well, they’ve stopped asking, and if they’ve stopped asking, there are almost certainly some gaps in their knowledge. Let’s make sure that in our deliveries the small print doesn’t get left off the page.

Tim: I would question your use of ‘online research’ – your question should have been: How well do clients understand the research they are buying? It is not a question for online research, it is a question for research. I think the debate would be far more helpful if it were framed in terms of, what does research need to do as a whole to do better work for its clients. At one time online research was, if you like, the upstart, and there were reasons for not wanting it to be, but now we’ve moved on a bit and I think there is a danger that commercial pressures stop us having the right methodological or even philosophical debates.

Adrian: Online is a reality. Get used to it. A lot of the time, commercial research doesn’t require a finessed, probability-based solution in the interests of insight generation. If I’ve got cuts and bruises and I go to the GP I don’t want a general anaesthetic.

Terry: The quality of online research doesn’t come down to just the online panels, it’s a much broader picture, and you get out what you put in a lot of times. Take it for what it’s worth, understand what you’re doing and move with it.

4 Comments


Useful discussion, and a wondrous definition of insight from Adrian: ‘number + narrative’. Tim’s point about data representativity being a broader research issue (requiring an understanding of research) that can’t be meaningfully discussed in isolation is key; you can’t discuss online research representativity without a discussion of research representativity in general. And it’s nice to see a discussion that moves beyond the ‘bad faith’ of researchers still peddling crude positivism and that ends the conspiracy of silence over research validity. At best, survey data will be representative of the 2% or so of people who participate in market research studies, and research reliability is not research validity. Whether research buyers are ready to buy into interpretivism and, like Neo in the Matrix, go down the rabbit hole will be interesting to see.



You should have included Steven Gittelman from MKTg on your panel. That would have livened things up a bit.



Tim's wise rhetorical question, "how well do clients understand the research they are buying? It is not a question for online research, it is a question for research" is indeed the crux of the matter, but I fear Paul Marsden is misguided if he means to suggest that applying the label of 'interpretivism' circumvents the problem. Interpretivism is not useful if the starting objective is to be representative. On the other hand, I am confident that there is a valuable place for online qualitative research (especially in conjunction with conventional methods), for which interpretivism is exactly what you need. It is a weak version of research when you hope for "insight generation" as a consolation because the research isn't fit for the originally-intended purpose; good qualitative methodologies are available as an alternative, and should be designed in as first choice.



I have attended industry events in Australia and the UK this year where there have been excellent, stimulating presentations on how the (digital) world is evolving, how customer communications are changing, the wide range of (new) tools now available for research and engagement, and fascinating case studies of insight being generated from online research and engagement. And within ten minutes of these refreshing presentations being delivered, researchers in the audience and on the stage are locked in a debate about the representativeness or representivity of online access panels, or worse, they are arguing about whether the word should be representativeness or representivity!

So the underlying problems appear to be how we as researchers and as a research industry cope with change in our environment, challenges to the status quo, evolution in our approaches to meet the needs of clients, challenges to our belief systems, and so on. If we don’t address these issues, we continue to struggle with change and uncertainty, we continue to resist new methods, we continue to be adversarial in our communications, we remain institutionally conservative as an industry, and we continue to miss opportunities for growth and development in our mutual best interests.

When I run workshops for researchers, I often ask them to go to different corners of the room depending on their underlying belief systems and then have conversations between different ‘belief clusters’. It is always insightful, especially around issues such as the degree to which we can precisely measure something and the extent to which it is important to precisely measure something. There is a much wider range of views within our industry than you might expect. Another stark differentiator is the extent to which quality is an objective, measurable standard that can be internally defined, or a more subjective and imprecise construct that emerges from conversations with clients to establish their needs at any point in time. That usually divides the room, with some incredibly dogmatic initial positions, and then brings forth enlightenment by reminding participants that there is no universal truth – unless, of course, you fundamentally believe that there is such a thing as a ‘universal truth’ (it’s another one that divides the room!).
