FEATURE
30 March 2012

Putty in their hands

Affinnova’s Jeffrey Henning looks at crowd-shaped surveys, which evolve in response to what respondents say and do.

Traditionally it doesn’t matter whether you are the first or last respondent to take a survey. What you see is a function of how you answer the questions. But an increasing number of surveys use input from the crowd to shape the experience of subsequent respondents, so what the last person sees is shaped by how the first person answered.

This is a new frontier for online surveys and a fertile field of exploration for survey computing professionals.

Crowd-shaped survey designs range from qualitative to quantitative. Let’s consider some of the possible techniques.

Question buckets
From a respondent’s perspective, most questionnaires are too long. From a survey analyst’s perspective, most open-ended questions have too many answers to go through.

For one study, we solved both problems by implementing question buckets. We arranged four open-ended questions in order of importance to us. Rather than always showing the question with the fewest responses so far, we repeatedly asked the most important question until it had gathered 100 responses. Once that bucket was full, we automatically closed the question and asked subsequent respondents the next-most important question. We repeated the process for each question.
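
In scripting terms the routing is straightforward. Here is a minimal sketch in Python, with hypothetical question IDs and an in-memory counter standing in for whatever response store the survey platform actually provides:

    # Question-bucket routing: ask the most important open-end that still
    # needs responses; close each bucket once it reaches its quota.
    BUCKET_QUOTA = 100

    # Questions listed in order of importance to the researcher (hypothetical IDs).
    questions = ["q_most_important", "q_second", "q_third", "q_nice_to_have"]

    # Running counts of completed responses per question (these would live in
    # the survey platform's database in a real deployment).
    response_counts = {q: 0 for q in questions}

    def next_open_end():
        """Return the most important question whose bucket is not yet full."""
        for q in questions:
            if response_counts[q] < BUCKET_QUOTA:
                return q
        return None  # all buckets full; skip the open-ended section

    def record_response(question_id, text):
        """Count a completed response towards that question's bucket."""
        response_counts[question_id] += 1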

Question buckets mean later respondents see different questions because of the actions of earlier respondents, although they aren’t responding to prior input.

The approach ensured that we got answers to the most pressing questions, while leaving room to capture the nice-to-have questions if we received enough sample. A sample size of 100 per question can be sufficient for some types of verbatim analysis, especially where you are looking for some colour to supplement the closed questions.

Crowd-sourced choice lists
A particular challenge when writing closed-ended questions is enumerating all the common choices. A crowd-sourced choice list instead adapts to reflect responses. It can start out like a conventional ‘choose one’ question, presenting a list of options decided by the questionnaire author and an open-ended ‘other’ option, or it can start out with just an open entry field.

“An increasing number of surveys use input from the crowd to shape the experience of subsequent respondents, so what the last person sees is shaped by how the first person answered”

Unlike in a traditional online survey where the ‘other – please specify’ responses are simply stored in the database, a survey using crowd-sourced choice lists adds every choice typed in by a respondent to the list. Respondents can choose from selections entered by earlier respondents, so if I type ‘Earl Grey tea’ in answer to a question about favourite drinks, other respondents will be offered that option in the list. If some of them pick it, it will keep appearing for later respondents. If they don’t, it will drop out. As more choices accumulate, only the most frequently selected choices are presented. The software chooses which choices to display according to probabilistic Bayesian scores.
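
The scoring each vendor uses isn’t spelled out here, but the behaviour described above can be sketched with a smoothed selection rate. The Python below is purely illustrative: the prior constants and the cap on displayed choices are assumptions, not any product’s actual parameters.

    # Crowd-sourced choice list: every typed-in answer joins the list, and the
    # options shown to later respondents are those with the best smoothed
    # selection rate, so rarely picked choices gradually drop out.
    shown = {}       # times each choice has been displayed to a respondent
    selected = {}    # times each choice has been picked

    PRIOR_SHOWN, PRIOR_SELECTED = 10, 1   # assumed smoothing constants
    MAX_DISPLAYED = 8                     # assumed cap on choices presented

    def add_write_in(choice):
        """An 'other' answer typed by one respondent becomes an option for later ones."""
        shown.setdefault(choice, 0)
        selected.setdefault(choice, 0)

    def record_display(choice):
        shown[choice] = shown.get(choice, 0) + 1

    def record_selection(choice):
        selected[choice] = selected.get(choice, 0) + 1

    def score(choice):
        """Smoothed selection rate: choices nobody picks fade out of the list."""
        return (selected.get(choice, 0) + PRIOR_SELECTED) / (shown.get(choice, 0) + PRIOR_SHOWN)

    def choices_to_display():
        """Only the most frequently selected choices are presented as the list grows."""
        return sorted(shown, key=score, reverse=True)[:MAX_DISPLAYED]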

Crowd-sourced choice lists don’t remove the potential for bias from standard scripted choice lists – in fact they introduce their own sources of bias. The answers entered by early respondents may dramatically shape subsequent selections.

These types of lists are best for teasing out the language most commonly used by respondents. They are well suited to pilot questionnaires, providing the author with the data to create a scripted closed-ended question for the next round of the survey.

Crowd-sourced laddering
Where crowd-sourced choice lists really come into their own is in laddering. In qualitative interviewing, the laddering technique involves continually probing on answers to open-ended questions in order to move discussion from features to benefits and from benefits to emotions. This approach is particularly difficult to automate.

BrainJuicer has come up with a solution to this called MindReader. The first respondent is asked to provide a number of examples. For instance, a respondent might be asked about their mobile phone: “What three features are most important to you?” Subsequent respondents see the most frequently selected past choices and may enter their own.

This is done for each question. After a top-level association is recorded, the system prompts the respondent for follow-on associations. For instance, if the respondent had answered “checking Facebook” then a follow-up question would be “Thinking about your mobile phone and ‘checking Facebook’, what comes to mind if I say to you, ‘keeping up with friends’?”
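
As a rough illustration of the data structure involved – this is not BrainJuicer’s actual MindReader implementation, just a Python sketch of a crowd-built ladder tree and the kind of probe it can generate:

    class AssociationNode:
        """One rung of the ladder: an association plus crowd-entered follow-ons."""
        def __init__(self, text):
            self.text = text
            self.mentions = 0
            self.children = {}   # follow-on text -> AssociationNode

        def record(self, follow_on):
            """A respondent linked a deeper association to this one."""
            node = self.children.setdefault(follow_on, AssociationNode(follow_on))
            node.mentions += 1
            return node

        def top_follow_ons(self, n=5):
            """Follow-ons most often given by earlier respondents, to offer next."""
            return sorted(self.children.values(),
                          key=lambda c: c.mentions, reverse=True)[:n]

    def follow_up_prompt(product, top_level, follow_on):
        """Builds a probe along the lines of the example in the article."""
        return (f"Thinking about your {product} and '{top_level}', what comes to "
                f"mind if I say to you, '{follow_on}'?")

    root = AssociationNode("mobile phone")
    facebook = root.record("checking Facebook")
    facebook.record("keeping up with friends")
    print(follow_up_prompt("mobile phone", "checking Facebook", "keeping up with friends"))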

Developing a tree of such choices would be a tedious experience for the questionnaire author and would no doubt introduce significant instrument bias. Crowd-shaped laddering brings this qualitative technique to online surveys with style.

Comment evaluation
Rather than simply have later respondents select from earlier respondents’ answers, another crowd-shaped survey approach involves having respondents rate those answers.

GMI offers a type of online survey question called Consensus, which provides for collaborative evaluation of open-ended statements. Respondents review other respondents’ comments and select and rate those they particularly agree or disagree with. The tool supports binary ratings (‘I agree with this’ v ‘Hey I don’t agree!’) and star ratings.

The problem with any survey system that echoes earlier respondents’ input to later respondents is the possibility of showing later respondents questionable content: misspellings, inaccuracies, libel, obscenities and so on. To prevent this, the GMI Consensus question supports the option for moderation, so that a researcher or the end client can approve each comment before it is shown to subsequent respondents.
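
A simple moderation gate captures the idea. The sketch below is illustrative only, not GMI’s actual Consensus implementation: comments sit in a pending queue, and only approved ones are offered to later respondents for rating.

    # Illustrative moderation gate for crowd-shaped comment evaluation.
    pending, approved, rejected = [], [], []

    def submit_comment(text):
        """A respondent's open-ended comment enters the moderation queue."""
        pending.append(text)

    def moderate(text, approve):
        """Researcher or client decision; only approved comments become visible."""
        pending.remove(text)
        (approved if approve else rejected).append(text)

    def comments_for_rating():
        """Comments later respondents can agree/disagree with or star-rate."""
        return list(approved)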

According to Jon Puleston, vice president of innovation for GMI, the company has observed some general benefits to the Consensus question. Analysis of open-ended comments is accelerated and streamlined, being in effect outsourced to respondents. The resultant analysis provides a nice qual-quant hybrid.

Concept optimisation
Most concept testing techniques, such as monadic surveys or prediction markets, can evaluate only a handful of ideas. By combining permutations of attributes and levels, concept optimisation creates an innovation space of far more ideas: thousands, millions or even billions of potential concepts.

“Consumers choose from concepts based on choices made by those who came before them. They are unknowingly collaborating with each other to converge on the top concepts”

Jeffrey Henning

An innovation space is the set of all possible concepts that can be produced by combining every variant of every element. As a simple example, imagine a new product concept with three elements, each with 10 variants. For instance, 10 potential names, 10 potential positioning statements and 10 potential varieties would form an innovation space of 1,000 concepts.
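
The arithmetic is simply the product of the element counts, as a few lines of Python make clear (the element lists are placeholders, not drawn from any real study):

    # Size of the innovation space = product of the number of variants per element.
    from itertools import product

    names        = [f"Name {i}" for i in range(1, 11)]          # 10 potential names
    positionings = [f"Positioning {i}" for i in range(1, 11)]   # 10 positioning statements
    varieties    = [f"Variety {i}" for i in range(1, 11)]       # 10 potential varieties

    innovation_space = list(product(names, positionings, varieties))
    print(len(innovation_space))   # 10 x 10 x 10 = 1,000 possible concepts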

How to identify the top concepts in an innovation space? In mathematics the problem is called combinatorial optimisation, and the typical solution is to use a search algorithm or metaheuristic. Affinnova uses an evolutionary search algorithm for its concept optimisation.

The first respondent sees random concepts (designed to be representative of the innovation space). Respondents’ preferences for one concept over another then inform the search algorithm. The second respondent sees randomly chosen concepts as well as some that are influenced slightly by the prior respondent. As hundreds of respondents complete the survey, the concepts shown to respondents converge on a subset of high-performing concepts. By about the 450th respondent, preferences have converged on three or four top concepts, which are the best ideas in the innovation space.
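
A generic evolutionary-search loop conveys the idea, although the sketch below is not Affinnova’s actual algorithm and the population size, mutation rate and mixing ratio are all assumptions: respondent choices act as the selection step, chosen concepts join the parent population, and new concepts are bred from parents with a small rate of random mutation.

    import random

    ELEMENTS = {             # hypothetical attributes and their variants
        "name":        [f"Name {i}" for i in range(10)],
        "positioning": [f"Positioning {i}" for i in range(10)],
        "variety":     [f"Variety {i}" for i in range(10)],
    }
    MUTATION_RATE = 0.1      # assumed; explores nearby concepts, prevents lock-in

    def random_concept():
        """A concept is one variant chosen for each element."""
        return {attr: random.choice(variants) for attr, variants in ELEMENTS.items()}

    # Parent population seeded as a representative sample of the innovation space.
    parents = [random_concept() for _ in range(50)]

    def breed():
        """Combine two parent concepts element by element, with occasional mutation."""
        a, b = random.sample(parents, 2)
        child = {}
        for attr, variants in ELEMENTS.items():
            child[attr] = random.choice([a[attr], b[attr]])
            if random.random() < MUTATION_RATE:
                child[attr] = random.choice(variants)   # try a nearby concept
        return child

    def concepts_to_show(n=4):
        """What the next respondent sees: mostly bred concepts, some random ones."""
        return [breed() if random.random() < 0.7 else random_concept() for _ in range(n)]

    def record_preference(chosen):
        """The respondent's preferred concept joins the parent population."""
        parents.append(chosen)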

Instead of reacting to handpicked concepts, consumers choose from algorithmically selected concepts based on the choices made by prior consumers. In essence, consumers are unknowingly collaborating with each other to rapidly converge on the top concepts within very large innovation spaces.

Crowd-sourced questionnaires
Pat Molloy, chief strategy officer of Confirmit, proposed the logical extension of crowd-shaped surveys: questionnaires composed by participants. “Traditionally we are in the ‘We have the questions, give us the answers’ mode. That is a bit tired to say the least,” he said.

Instead, imagine recruiting a small bulletin board focus group or tapping into an existing market research community and starting a collaborative exercise to build a questionnaire. You specify the market opportunity and ask, ‘If we were going to ask a larger group of people what they think about this, what questions would you ask that are the most relevant or interesting?’.

The community would seed ideas for the questions and would later vote on these ideas. The result would be an ordered list of questions, ranked on importance or relevance, and those questions would form the basis for more structured research among a large sample.
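
The mechanics amount to little more than a vote tally. A minimal sketch, assuming a hypothetical data model in Python:

    # Crowd-sourced questionnaire: community members seed question ideas, vote on
    # them, and the output is a ranked list to hand on to the structured study.
    from collections import Counter

    votes = Counter()   # question text -> number of community up-votes

    def propose(question_text):
        """A community member seeds a question idea."""
        votes.setdefault(question_text, 0)

    def vote(question_text):
        """A community member votes for a question they find relevant."""
        votes[question_text] += 1

    def ranked_questionnaire():
        """Questions ordered by importance/relevance, ready for the larger sample."""
        return [q for q, _ in votes.most_common()]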

“There are all sorts of issues and problems with this which would need some professional intervention to fix (leading questions, biased questions etc), but if executed well it could deliver real value,” says Molloy. “It’s in the realm of adaptive questionnaires and co-creation, quali-quant and voting/ranking. It steals ideas from different trends and attempts to merge them.”

Jeffrey Henning (@jhenning) pioneered the enterprise feedback management industry – and the #MRX community on Twitter. He works for Affinnova.

4 Comments

Interesting article Jeffrey, and it sparks a few ideas. One question for you though - in the collaborative approach, how do you ensure the quality of responses that later respondents assess? As you say, if you enter "Earl Grey Tea", later respondents have that as an option, but what if you enter "eArl greay T"? Is it much more labour-intensive due to having to moderate the questionnaire as it develops?

I may have missed it... how is this different from intelligent conjoint?

Hi Jeffrey - under the concept testing algorithm, to what extent do the first 10th of the sample respondents impact the remainder? Surely they could swing the entire concept early on, and the remaining sample fine-tunes a concept which possible outliers have formed? This approach does require that response rates over the key demographics are managed with some sort of algorithm as well, as first/early online respondents are substantially different from late? (This couldn't be done face to face.) Thanks - Ryan.

Nick, moderating responses to verbatim questions definitely adds a layer of labor to the fielding of a survey but is helpful for fixing typos and for preventing inappropriate comments from being published to later participants.

Dan, which variety of conjoint do you mean? Conjoint and evolutionary search are quite different and have their own strengths and weaknesses: conjoint simulates the entire innovation space of possible concepts, while the evolutionary search algorithm seeks out top concepts but can't simulate the entire space.

Ryan, with evolutionary search, you are right that you need to keep the sample demographics in balance throughout fielding. Evolutionary search starts with a parent population designed as a sample of the entire innovation space of possible concepts – each concept selected by a respondent gets added to this starting population of parents, so it takes more than 10% of the respondents to significantly change this population. When parent concepts are combined to form a concept to evaluate, they are sometimes combined with a rate of random mutation – selecting another element for a particular attribute. This is done to find nearby concepts that might be better while preventing lock-in based on the first responses.
