OPINION
18 September 2019

Fixing bad surveys when design is out of your control


Poorly designed surveys can be improved at the supplier level, from better matching of respondents to solving technical problems, writes JD Deitch.


Punishing respondent experiences have plagued the market research industry for years. Efforts to improve have often focused on survey design: more user-friendly interfaces, shorter lengths, mobile compatibility, better readability and more.

Still, bad surveys continue to make their way in front of respondents, and as consumer expectations for seamless, painless digital interactions rise, these bad experiences are no longer tolerated. The result is high dropout rates, incomplete interviews and poor data quality.

Why can’t questionnaire design keep up? In truth, those who design surveys are not the only ones who can fix the problem. There are specific aspects of ‘bad’ surveys that suppliers are increasingly able to control.

Matching respondents with the right surveys

How often do people start a survey only to be disqualified or kicked out because quotas are full, or – worse – asked again and again for the same demographic information, only to find out the survey is not a good fit? Great strides have been made in profiling, which now goes well beyond basic demographics to include a host of behaviours, intentions and product ownership. Suppliers can use all of these data points to steer a potential respondent to the most appropriate survey for them, and technologies such as automation and AI make it possible to hit the bullseye far more often.

Better matching of respondents to surveys can only be achieved if all of this information is put to use in field through APIs. The data is useless unless it can be acted upon at the moment of truth. APIs create an environment in which this data can be transmitted unobtrusively, saving the user time and frustration. Technology will continue to play a big part in deploying profiling data to properly target respondents, improving the user experience and thus yielding more accurate data.
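To make this concrete, here is a minimal sketch of what API-driven matching could look like. The endpoint, the field names (quota_remaining, targeting, experience_rating) and the selection rule are all hypothetical illustrations invented for this example, not any particular supplier’s actual API:

```python
import requests

# Hypothetical panel API base URL, used purely for illustration.
PANEL_API = "https://api.example-panel.com/v1"

def find_best_survey(respondent_id: str) -> dict | None:
    """Return the open survey that best matches the respondent's stored profile."""
    # Fetch the profile the panel already holds, so the respondent is never
    # asked for the same demographics again.
    profile = requests.get(f"{PANEL_API}/respondents/{respondent_id}/profile").json()

    # Fetch surveys currently in field, with their targeting criteria and quotas.
    surveys = requests.get(f"{PANEL_API}/surveys", params={"status": "open"}).json()

    candidates = []
    for survey in surveys:
        # Skip surveys with no quota left, avoiding the classic
        # "overquota after five screener questions" dead end.
        if survey["quota_remaining"] <= 0:
            continue
        # Only route the respondent if the stored profile already satisfies
        # every targeting criterion.
        if all(profile.get(key) in wanted
               for key, wanted in survey["targeting"].items()):
            candidates.append(survey)

    # Prefer the survey with the best observed respondent experience so far.
    return max(candidates, key=lambda s: s.get("experience_rating", 0), default=None)
```

The key design point is that the match happens before the respondent clicks through, using data already on file, rather than after a string of screener questions.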

Taking bad surveys out of field

Sound drastic? It’s one clear way to fix a bad experience, sometimes even before it happens. We can now collect and analyse, in real time, a vast number of data points that reveal how consumers feel about their research experience. Suppliers can capture basic indicators such as dropout rates and overquotas, while also gathering study data such as incidence rates and questionnaire length. Overlay this with respondent-specific demographics, plus respondents’ own ratings of the experience, and we get a fairly good picture of survey quality. If the data shows an experience is dismal, automation can pull the survey out of field.
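As an illustration, such an automated ‘circuit breaker’ might look like the sketch below. The metric names and thresholds are assumptions made for the example, not industry standards:

```python
from dataclasses import dataclass

@dataclass
class FieldMetrics:
    starts: int                   # respondents who entered the survey
    dropouts: int                 # abandoned mid-survey
    overquotas: int               # screened out because a quota was already full
    median_loi_minutes: float     # median length of interview observed so far
    avg_respondent_rating: float  # post-survey experience rating on a 1-5 scale

def should_pull_from_field(m: FieldMetrics) -> bool:
    """Return True if live metrics suggest the survey is punishing respondents."""
    if m.starts < 50:
        return False  # too little data to judge the survey fairly
    dropout_rate = m.dropouts / m.starts
    overquota_rate = m.overquotas / m.starts
    return (
        dropout_rate > 0.40            # heavy mid-survey abandonment
        or overquota_rate > 0.30       # quotas full but survey still taking entrants
        or m.median_loi_minutes > 25   # far longer than respondents were promised
        or m.avg_respondent_rating < 2.0
    )
```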

Data suggests that the biggest causes of bad experiences are not actually issues of research design; rather, they are technical or operational problems. Bad redirects, programming issues, quotas that are full but not closed, poorly implemented automation and router dumping are the principal culprits. We regularly send people into places where we know a bad experience awaits. We now have the technical ability to keep this from ever happening, using data that is already available. On the other hand, we can use this same data to promote good experiences: studies that have low dropout rates and high ratings can go to the front of the line, as the sketch below illustrates.
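A simple ranking along these lines might look like the following; the scoring rule and its weighting are invented for the example:

```python
def experience_score(dropout_rate: float, respondent_rating: float) -> float:
    """Higher is better: reward high ratings, penalise dropouts."""
    return respondent_rating - 3.0 * dropout_rate

surveys = [
    {"id": "A", "dropout_rate": 0.10, "respondent_rating": 4.5},
    {"id": "B", "dropout_rate": 0.45, "respondent_rating": 2.1},
]

# Surveys with low dropout and high ratings go to the front of the line.
surveys.sort(
    key=lambda s: experience_score(s["dropout_rate"], s["respondent_rating"]),
    reverse=True,
)
print([s["id"] for s in surveys])  # -> ['A', 'B']
```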

Yes, survey design is a vital piece of the puzzle. There is a vast amount of evidence showing that bad designs yield bad data. But with all the technology and data we have at our fingertips in today’s digital ecosystem, there’s no need to wait for survey design to improve. As suppliers, we have the ability to improve respondent experiences from multiple angles.

JD Deitch is chief revenue officer at P2Sample.
