ADVERTISEMENT FEATURE
13 November 2023

Combating fraud

Sponsored content on Research Live and in Impact magazine is editorially independent.


Data quality will differentiate agencies and suppliers. By Judith Staig

[Image: a woman facing a robot]

Declining data quality in online research is the industry’s worst-kept secret. Most researchers are aware that it is increasingly difficult to ensure data quality, with the central question being: “When you recruit somebody, how do you know who they really are, whether they have answered honestly, or that they are even human at all?”

This goes beyond problems that are easy to fix, such as removing speeders and flatliners – respondents who rush through surveys, or tick the same answer down an entire grid, without really considering the questions. More complex and insidious issues include automated bots set up to answer surveys at scale and professional respondents – people who understand how surveys work and want to game the system.
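To make those "easy to fix" checks concrete, the sketch below shows one way speeders and flatliners might be flagged in a response file using Python and pandas. It is a minimal illustration, not any agency's actual pipeline; the column names, the grid questions and the three-minute cut-off are all assumptions.

```python
# Minimal sketch: flag speeders (implausibly fast completes) and flatliners
# (identical answers across a rating grid). Column names are hypothetical.
import pandas as pd

def flag_basic_quality(df: pd.DataFrame, grid_cols: list[str],
                       min_seconds: float = 180.0) -> pd.DataFrame:
    """Add flag columns for speeders and flatliners."""
    out = df.copy()
    # Speeders: completed far faster than a plausible minimum duration.
    out["flag_speeder"] = out["duration_seconds"] < min_seconds
    # Flatliners: gave the same answer to every item in the grid.
    out["flag_flatliner"] = out[grid_cols].nunique(axis=1) == 1
    return out

# Toy usage: respondent 2 is both a speeder and a flatliner.
responses = pd.DataFrame({
    "respondent_id": [1, 2, 3],
    "duration_seconds": [620, 95, 410],
    "q5_1": [4, 3, 3], "q5_2": [2, 3, 5], "q5_3": [5, 3, 4],
})
flagged = flag_basic_quality(responses, grid_cols=["q5_1", "q5_2", "q5_3"])
print(flagged[["respondent_id", "flag_speeder", "flag_flatliner"]])
```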

Simon Glanville, managing director at Ronin, says: “We are getting hit with individuals trying to access a study multiple times, break encrypted links, or set up an automated response. We have had instances of all of these and been able to identify them, but we can’t be complacent.”

Don’t ask, don’t tell

From the respondents' side, the problem is driven by incentives. The pennies offered for online quant surveys become more attractive if you can complete a survey multiple times by breaking encrypted links or using automation, and the hundreds of pounds sometimes on offer for qualitative research in sectors such as finance or healthcare can encourage respondents to cheat. In business-to-business (B2B) research, it is possible to verify people via social media profiles or telephone checks. The problem is harder with business-to-consumer access panels, often because the long chains of suppliers involved in providing the sample are not always transparent.

Glanville agrees that sample provenance is important. “For consumer, I could go out this morning and buy 1,000 interviews at about 70p per interview,” he says. “But I know that, when I look at the output, I would have to remove 70-80% of the data. That is a terrible state of affairs – and if you are removing that much data, you can’t even believe the 20% you are left with. It should almost come with a health warning.”

We can’t just blame respondents. We need to recognise how the industry has contributed to the problem. There is so much pressure on turnaround times and budgets that, if a supplier offers feasibility at the right price, it can be tempting not to ask too many questions about the source.

Additionally, younger researchers who weren’t trained in the days of widespread telephone research and paper-based table checking don’t necessarily have the experience of quality control, or the skills needed to spot data issues. Glanville says: “As an industry, we have missed the boat, collectively, in not imposing the kind of rigorous quality-control checks we had with telephone research on to online, and it’s now difficult to put the genie back in the bottle.”

Poor quality data erodes trust

On the client side, Kevin Woods, senior research director at Brand Finance, says bad experiences with data quality can erode trust in a supplier and put clients at risk of damaging relationships with their own customers. Clients are relying on agencies and suppliers to take the burden of quality checks away from them. "Bots and people gaming the system are getting smarter and more plausible," Woods says, "so you have to do the logic check to make sure answers make sense – not just within an isolated section of the survey, but all over."
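A cross-section logic check of the kind Woods describes might look something like the sketch below. The column names and the specific rule (a respondent who claims to be unaware of a brand should not later report using it) are hypothetical illustrations, not Brand Finance's actual checks.

```python
# Minimal sketch: flag respondents whose answers contradict each other
# across survey sections. Columns and the rule itself are illustrative.
import pandas as pd

def flag_logic_conflicts(df: pd.DataFrame) -> pd.Series:
    unaware = df["aware_brand_x"] == 0              # awareness section
    claims_usage = df["used_brand_x_last_month"] == 1  # later usage section
    return unaware & claims_usage

responses = pd.DataFrame({
    "respondent_id": [1, 2, 3],
    "aware_brand_x": [1, 0, 0],
    "used_brand_x_last_month": [1, 1, 0],
})
responses["flag_logic_conflict"] = flag_logic_conflicts(responses)
print(responses)  # respondent 2 is flagged
```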

Steven Thomson, insight director at Brand Finance, gives an example of the sort of additional checks he expects suppliers to make: “Look at data by panel source. If there are outliers – a brand has 60% awareness, for example, but, on one source, it is only 20% – then you know there is a problem. It shouldn’t be down to us to spot this.”
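As an illustration of Thomson's example, the sketch below compares brand awareness by panel source against the overall figure and flags any source that deviates sharply. The column names and the 20-point threshold are assumptions for demonstration only.

```python
# Minimal sketch: compare a metric (brand awareness) by sample source
# against the overall figure and flag outlying sources.
import pandas as pd

def awareness_by_source(df: pd.DataFrame, threshold_pts: float = 20.0) -> pd.DataFrame:
    overall = df["aware_brand_x"].mean() * 100
    by_source = (df.groupby("panel_source")["aware_brand_x"]
                   .mean().mul(100).rename("awareness_pct").reset_index())
    by_source["deviation_pts"] = by_source["awareness_pct"] - overall
    by_source["flag_outlier"] = by_source["deviation_pts"].abs() > threshold_pts
    return by_source

responses = pd.DataFrame({
    "panel_source": ["A", "A", "B", "B", "C", "C"],
    "aware_brand_x": [1, 1, 1, 0, 0, 0],
})
print(awareness_by_source(responses))  # sources A and C are flagged
```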

Rigorous checks are needed

Agencies and suppliers must put rigorous procedures in place for sourcing samples and checking data early and often. Techniques such as piloting or soft launching surveys, and asking for phone numbers and following up with a proportion of respondents, can all help eliminate problems. Better survey design using techniques such as gamification can also help engage genuine respondents. There is also likely to be a role for artificial intelligence tools, particularly in checking open-ended responses – a key indicator of issues, but hard to check at scale.
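Open-ended responses are also one place where simple automation can help even before any AI tooling is involved. The sketch below flags duplicated verbatims and very short or low-variety answers; the thresholds are purely illustrative, and real open-end review would go much further.

```python
# Minimal sketch: crude open-end screening. Duplicated verbatims across
# respondents and very short or low-variety answers are common signals of
# bot or low-effort responses. Thresholds are illustrative assumptions.
import pandas as pd

def flag_open_ends(df: pd.DataFrame, col: str = "open_end") -> pd.DataFrame:
    out = df.copy()
    text = out[col].fillna("").str.strip().str.lower()
    # Identical verbatims on multiple completes suggest copy-paste or bots.
    out["flag_duplicate_verbatim"] = text.duplicated(keep=False) & (text != "")
    # Very short answers, or answers with almost no distinct characters.
    out["flag_low_effort"] = (text.str.len() < 10) | (text.apply(lambda s: len(set(s))) < 5)
    return out

responses = pd.DataFrame({
    "respondent_id": [1, 2, 3],
    "open_end": ["Great value and easy to use", "asdfasdf", "asdfasdf"],
})
print(flag_open_ends(responses))
```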

Glanville takes a thorough approach to validating B2B respondents. “We don’t operate a panel, but we have invested in building our platform,” he says. “We always ask about future participation and keep those profiles, so we have about 200,000 verified people. You can verify effectively at the point of incentive payment, because that is when you are checking their details.”

Transparent communication builds trust

Data quality – and the way problems are dealt with – can differentiate agencies and suppliers. The most important factor in building trust is open and transparent communication, says Woods, who adds: “I would rather someone came back to me and said, ‘we can’t do it’ than ‘we’ll give you something and it won’t be very good’.”

On the client side, there is a growing understanding that these sorts of quality checks take time and cost money; if one supplier is considerably cheaper than another, it is important to investigate and understand why. However, Glanville believes more education is needed. “Sometimes, the people having the discussions about budget aren’t the people who understand about the process of data quality,” he says.

MRS is taking the lead in the UK on data quality and fraud, working with a number of international partners, including Ronin, as part of a non-commercial initiative, Global Data Quality. “I feel encouraged that the industry has now woken up to this,” says Glanville. “It has been a dirty secret for a while, but clients are now getting on board and understanding that you get what you pay for. It’s good to be talking about these things.”

This article was first published in the October 2023 issue of Impact.

1 Comment

8 months ago

This is a topic which resurfaces with disappointing regularity, demonstrating that our sector is unwilling to address the issue and thus implicitly accepts declining quality! But, in my opinion, this is not solely due to incentives. I believe it is a much broader problem of poor engagement, which starts the very first time a person receives an invitation to join a panel, let alone an invitation to participate in a survey. I have regularly posted on this topic in the past year (https://www.linkedin.com/posts/finn-raben_mrx-marketresearch-jargonbuster-activity-7128383211985887232-VGzP?utm_source=share&utm_medium=member_desktop) and I think it's time for us to take a good look in the mirror: our reputation as a sector depends on it.
