FEATURE | March 2011

Big bad wolf? Or fairy godmother?

Free or cheap online tools mean just about anyone can run a survey these days – and millions of people are. SurveyMonkey’s Philip Garland says it’s time research agencies tapped into the power of DIY.

Do-it-yourself survey research using free or low-cost tools from the likes of SurveyMonkey, Zoomerang, QuestionPro and others has been well discussed in industry publications, events and blogs. Attitudes toward DIY have ranged from viewing it as the big bad wolf to hailing it as the saviour of research. The view we now seem to have settled on is along the lines of: ‘It’s OK if used properly, for limited purposes.’ That’s only really a small step on from: ‘Garbage in, garbage out.’ This is precisely the wrong way to view DIY.

Let’s start with the assumptions on which these attitudes are founded. For a decade, DIY has been replacing the medium of paper and pencil surveys (and to some extent, email surveying) but not professional market research. However, in the course of that disruption to the paper products, lead pencil, and email storage industries, DIY research has illuminated an entirely different set of inefficiencies in the research industry.

But first let me address another assumption behind criticism of DIY.

“Plenty of garbage can flow in and out of research whether it’s DIY or not”

The majority of people in the market research industry (including DIY survey companies) are trained primarily in something other than survey methodology. There are a limited number of professors who specialise in this subfield of their respective disciplines and, of course, not everyone has spent a graduate degree studying under one of these scholars. That means most people learn how to do market research from other market researchers. This largely informal education in market research has led to numerous worst practices masquerading as the right way to do things. We have agree-disagree scales that invite acquiescence response bias (which holds untold numbers of correlations together like magic glue), scale lengths that defy human cognition, scale point labels that defy language, grid presentations that scream, ‘Fill this out quickly!’ rather than, ‘Take your time and think about this,’ and, of course, surveys that take 30 minutes or longer to complete. So plenty of garbage can flow in and out of research whether it’s DIY or not.

But questionnaire writing is just the beginning of the problem with the research supply chain. Many studies now feature response rates in the single digits, which is especially problematic when those respondents were themselves taken from a panel whose recruitment response rate was already in the teens. Some great researchers (my own Stanford PhD adviser included) can certainly demonstrate that data quality isn’t affected by response rates if the initial sampling frame is based on probability, and that’s true, especially when predicting elections or other matters of general public opinion. But what would the results look like if a phone- or address-based study tried to predict how many people sent a tweet or queried a search engine today? And if a probability-based sampling frame could actually predict those numbers, how long would it take to gather and make sense of the data? How much would it cost?
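To see how those numbers compound (the article gives only ranges, so the figures here are illustrative), take a panel recruited at a 15% response rate and a study that 8% of invited panellists complete. The cumulative response rate is

$$0.15 \times 0.08 = 0.012 \approx 1.2\%,$$

that is, roughly one completed interview for every hundred people the process originally set out to reach.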

At the root of research’s sampling problems is the public’s declining interest, worldwide, in taking surveys. With so many communication tools available to them, why do people need to take surveys to influence the products and issues they care about? And if survey responding is a finite resource, why do two firms ask the same questions in a given month? We don’t have very good empirics on this because sampling tends to be detached from questionnaire design, and firms in the classic market research supply chain can’t keep track of the questions they ask each year. DIY firms, however, do store this information. So should agencies keep asking one more person yet another question, or would they be willing simply to buy the results they need from one another, or from a DIY firm?

This is where DIY may look like the big bad wolf.

“DIY surveys don’t just reach a special group of people – they interview just about everyone”

As a group, DIY firms conducted more than 600 million interviews in 2010 and will easily eclipse that number this year. Each day for three weeks during the summer of 2010, SurveyMonkey invited a random subset of people who had just completed a survey on its system to answer one more question: “Do you approve or disapprove of the way Barack Obama is handling his job as President?” This is Gallup’s classic measure of presidential approval. A total of 87,000 people (a 46% response rate) from more than 8,300 of the country’s 19,000 cities responded, matching the results of Gallup’s RDD (random-digit dialling) telephone studies within the margin of error nearly every day, without the use of statistical weighting. DIY surveys don’t just reach a special group of people – they interview just about everyone, even market researchers (who are people too, despite what some screener questions suggest).
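For a sense of what ‘within the margin of error’ means here (the per-day sample size below is inferred from the totals in the paragraph above, not stated in it), the standard 95% margin of error for an estimated proportion is

$$\mathrm{MOE} = 1.96\sqrt{\frac{p(1-p)}{n}}.$$

Spreading roughly 87,000 responses over 21 days gives about $n \approx 4{,}100$ per day; at the worst case $p = 0.5$, that works out to about $\pm 1.5$ percentage points a day.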

The people who create DIY surveys ask all sorts of questions, but our analysis shows that most of them are actually asking the same ones. In the categories of customer satisfaction, human resources, education, healthcare and training/event feedback, hundreds of thousands of surveys containing millions of survey questions consistently yield fewer than 75 unique topics in any one category.

This shows us that the focus should not be on DIY as a tool – plenty of companies offer survey-programming tools that the industry uses every day. Instead, the research community should focus on how to harness the tremendous information assets available to the DIY survey firms. Today, if someone wants to know what the best set of customer satisfaction questions is, they tend to rely on the classically trained market researcher or the smart guy from a big brand (who was invariably a market researcher himself at some point). Why not tap the wisdom of crowds on the supply side, and find out what a hundred thousand survey makers consider important?

This is where DIY actually turns out to be the fairy godmother.

“Who is better equipped to sift through, make sense of and add value to all of the rich data available from DIY survey firms than market researchers?”

Who is better equipped to sift through, make sense of and add value to all of the rich data available from DIY survey firms than market researchers? The staff of DIY firms themselves certainly don’t have the resources to engage in this sort of value-add, and moreover DIY business models aren’t constructed to support big research and sales teams. This data needs to be sampled, analysed and reported – all skills that market researchers currently have – and then sold to businesses that need the information. In other words, there is an entire economy that could sit on top of the DIY model and its platforms.

So despite all of the surveying they do, DIY companies are, for the most part, internet technology companies. Their adjacent competitors are not the likes of Ipsos and GfK.

The world of research has been buzzing about harnessing social networking for years now, and the major social networks certainly have plenty of scale that researchers would love to get their hands on. But how many members of sites like Facebook signed up so they could take surveys or answer questions from discussion moderators? And why would those sites’ owners jeopardise their users’ trust for us? Meanwhile, research has been ignoring the stepchild technology businesses in its own backyard: survey technology companies that interact with millions of people every week.

It seems like everyone in research is talking about DIY. Hopefully they’ll start talking with DIY soon.


Philip Garland is vice president of methodology at SurveyMonkey. He joined the company in 2009 with responsibility for ensuring its products produce the highest-quality data. He was previously chief methodologist at online sample provider SSI.
