December 2011

How was it for you?

Features

Surveys can be slick, interesting and enjoyable. But we all know how often they actually are. Robert Bain considers the state of respondent experience and whether it’s improving.

Are surveys dying? It’s one of the perennial navel-gazing questions we all get used to hearing in this industry. But when clientside researchers start asking, it might be time to sit up and take notice. At Esomar Congress in September, Joan Lewis of Procter & Gamble asked for help from agencies in moving away from survey research and adopting other methods. She’s got nothing against surveys, she hastens to add – it’s consumers that don’t want to do them.

Take some surveys yourself and you’ll understand why. Undoubtedly a lot of hard work and innovative thinking goes into survey design, but the quality is still extremely variable, and the mistakes that make the difference between good and infuriating are far too common. There are lots of experiments with more interactive approaches, but not enough to reduce the sea of junk. And the problem is not just cheap DIY efforts – agencies and clients who should know better continue to spam respondents with dull, lengthy and poorly thought-out questionnaires too.

“People all over the world are creating, sharing and completing online surveys, unprompted and without incentives from companies”

This is odd, because in the broader worlds of technology and commerce, user experience design is a strong contender for the title of ‘the new rock ’n’ roll’. But in survey design, complacency continues to manifest itself in basic errors of logic, waffly questions, nonsensical scales, endless grids and industry jargon. A comparison between the effort and rigour that goes into surveys, and that which goes into the brand communications that they support, tells you everything you need to know.

Techniques like online communities that encourage more involvement from respondents – or rather participants – can go some way to breaking the deadlock. But research suggests it is wrong to assume communities are engaging per se, and young people may turn up their noses at the idea of joining communities just to talk about brands.

Not only is the survey experience often not up to scratch, but the respondents who put up with it don’t always feel their contribution is sufficiently valued. The problems page of The Observer recently featured a letter from ‘DT’ of Macclesfield, a Toluna panel member for more than six years who told of racking up a substantial pile of reward points, only to find one day that all £270-worth had vanished.

Toluna warns its panellists that points expire after a certain time, but in this case the warnings hadn’t got through. Once The Observer got on the case, Toluna agreed to give back £56.25 in points and vouchers, representing rewards lost after it shortened the time limit last year – but that’s all. It’s hard to see DT’s evident enthusiasm for online surveys enduring now.

Toluna says it values its members and is upfront about its policies – and that DT’s experience seems to have been unique. “With millions of community members, we know we can’t please everyone all the time,” UK managing director Mark Simon told Research.

What’s clear is that research companies can do more to make sure the people whose information they make their living from are kept happy. Industry codes aren’t much help in this area. The MRS Code of Conduct, for instance, requires researchers to ensure the data collection process is “fit for purpose” and that the design of the data collection process is “appropriate for the audience being researched”. It’s strict about protecting respondents from harm, but doesn’t have much to say about the quality of the experience – that’s left pretty much up to the researcher. Even so, there are a number of reasons to be optimistic about the future of respondent experience.

Positive signs
The first thing worth noting is that the idea that people just can’t be bothered to answer survey questions doesn’t hold water. People all over the world are creating, sharing and completing online surveys, unprompted and without incentives from companies. SurveyMonkey is used to collect about 33 million responses every month – and it’s just one of numerous DIY survey tools. Toluna and uSamp have also moved into the DIY space with free survey tools, while startups Wayin and Opinionaided are seeking to combine Q&A tools with social media. The fact is, people like asking and answering questions.

Another ray of hope comes with the rise of gamification. A year ago when Research picked out ten trends that would shape the industry in 2011, gamification wasn’t among them. It was on our radar, but not yet a significant influence on research thinking.

Now some researchers seem to talk of little else. At Research 2011 in March there were two workshops that involved ‘gamifying’ research techniques, and Esomar Congress hosted no fewer than three presentations on the subject. Companies including Market Strategies International, GMI and Engage have run experiments on ways of applying game mechanics to research, and Nebu’s former UK account manager Betty Adamou has started a company to focus specifically on research through gaming.

The reason this is such an encouraging development is that it has the respondent at its heart. To borrow the language of Deborah Sleep and Jon Puleston (of Engage and GMI respectively), it’s not just about making surveys more like games, it’s about making them more personal, fun and human. Combined with the growing interest in behavioural economics, gamification holds the promise of surveys that take better account of the human element.

The other thing about gamification is that it puts respondent experience on a positive footing. Instead of taking things from poor to good-enough, its aim is to take things from good to great – to create surveys that people will enjoy rather than just tolerate. It reminds us that surveys aren’t so bad after all – not if they’re done well.

Further hope comes from the focus on respondent experience among sample providers. SSI rates surveys using respondent satisfaction scores looking at factors including design, ease of use, respect for respondents’ time, opportunity to learn and ‘fun’. It also runs an award scheme to honour the companies that provide the best experience – China’s Data100 won this year’s top prize, with regional awards also going to US-based InfoTrends for the Americas and UK-based Opinion Matters for Europe.

MarketTools has also sought to quantify the quality of surveys, using data provided by the survey designer on how long a survey is, how long the questions are and how many grids it includes. The company’s research-on-research suggests that poor respondent engagement and high levels of survey abandonment have significant effects on the reliability of data and predictions.

In August, Research Rockstar introduced its SurveyGrader tool, which founder Kathryn Korostoff describes as a “sanity check”. Most of those using it, she says, are either relatively new to research or don’t work in research full time. But professionals and amateurs alike are prone to making surveys too long, or forgetting to take their audience into consideration when composing questions and answers.

“So many market researchers cry about people using DIY research tools who don’t know what they’re doing, but I see far more egregious survey designs coming from supposedly professional researchers”

Kathryn Korostoff, Research Rockstar

“I’d say 60% of the professional surveys I see are quite good,” says Korostoff. “Unfortunately I’d say 40% are really pretty awful. What irks me is that so many professional market researchers are always crying about people using DIY research tools who don’t know what they’re doing, but I see far more egregious survey designs coming from supposedly professional market researchers. I’ve never seen a DIY market researcher force somebody through 20 pages of grids.

“In an ideal world, everyone would take weeks of professional training before they did survey design, but we don’t live in an ideal world. I do believe we can teach people the basics in a relatively short amount of time; we just have to raise awareness.”

Respondents are people
For an industry that makes its living telling other companies how to treat their customers, researchers should really be the experts in this sort of thing. David Conway, strategy director of Nunwood, runs regular surveys looking at customer experience among businesses in the US and the UK. So what does he think about the respondent experience provided by his own industry? “My experience would be that respondents are a means to an end,” he says. “Research agencies are very good at managing their client relationships, but perhaps not very good at thinking about what it feels like to be on the other end of the scale, providing this sort of information.”

For its part, Nunwood is experimenting with various ways of engaging respondents more effectively, creating ‘sexy surveys’ and trying out new approaches to qual research, including online methods.

“I think there’s a general trend to think more about the role of the respondent and how you get more quality information out of them,” says Conway. “If someone’s relaxed and emotionally engaged, they’re going to give you more than if they’re just trying to get to the end of the survey. On the other side of it, it’s the takeaway – what is the respondent left with at the end? Can they print off a copy or be signposted to where that report will appear? It’s not always possible but sometimes it might be. I’m sure there’s an enormous amount more the industry could do if it focused attention on it.”


How to get it right

For some companies good respondent experiences come naturally. Doctors.net.uk, whose membership includes about nine out of ten doctors in the UK, has a research division, MedeConnect, which runs its own panel. The experience that members have with research has to be up to scratch, because the pool of doctors is so limited and their time is so valuable.

As a result, managing director Anna Garofalo takes a tough line on complex grid questions. “We tend to push back heavily with our clients on that sort of thing,” she told Research. “Where we stand is: keep it short, keep it direct. Think about it from a respondent perspective. Would you want to answer a bunch of questions that are complex and demanding, that require mental gymnastics in order to get the information? If the answer is no, you need to think differently about the way you approach design.”

“Think about it from a respondent perspective. Would you want to answer a bunch of questions that require mental gymnastics?”

Anna Garofalo, MedeConnect

It’s a similar story at Mumsnet, the huge online community of parents, which also runs a panel for research and product testing. Members love being asked questions, says founder Carrie Longton, but they have no patience with boring surveys or endless grids.

Garofalo emphasises that such discipline doesn’t have to be a bad thing. “If I were to ask you about the specific detail of what you plan to do in the next three to six months at a level of granularity that you can’t get your head round, it’s pointless asking. But if I ask you in a way that’s more realistic and goes to a sufficient level of detail – but not too far – that’s going to be more reliable. The more levels of granularity you add, the greater degree of error you introduce to your findings.”

As well as data quality, it’s a question of looking after a valued community. About 40% of Doctors.net.uk’s 188,000 members are opted in to take part in research, and “their experience with the Doctors.net.uk website is front and centre,” says Garofalo.

“If there’s anything that detracts from that – and that includes the research we do – we have to be incredibly careful.”