Measure for measure: What do brands think about data quality?

For clients, data quality is fast becoming a pressing issue. Some are starting to take a tougher approach. Liam Kay-McClean examines some of the key questions.

In the conversation around data quality in market research, industry bodies and agencies have been leading the charge towards higher standards. The launch of the Global Data Quality (GDQ) initiative by a number of leading market research representative organisations, including MRS in the UK and the Insights Association in the US, has underlined the work being done across the sector to bring data quality issues to the fore and to raise standards. MRS has also launched a Campaign for Better Data to provide guidance and training to help the sector strengthen and evolve the quality of the evidence it produces and reinforce public trust in research.

In addition, GDQ launched a Data Quality Excellence Pledge in March this year to set out quality standards for the industry, and the Insights Association has also been measuring data quality across the US insights industry, with benchmarks put in place to provide a method of comparing and contrasting the performance of different panel companies. But despite the good work underway, the industry still has some way to go to solve the issue, or at least reduce its impact.

What do brands and end users think about the ongoing efforts to address data quality issues? Is the industry easing any concerns? And how high is it on the agenda? “The insight industry influences decisions that affect people’s lives,” says Patrick Alcantara, until recently director of compliance and insight at the Lending Standards Board. “Our credibility depends on reliable, accurate data and data quality should be at the heart of our practice.

“I have been fortunate to work for organisations that abide by high standards for working with data, but we can collectively be better at asking about the provenance and quality of the data that we are supplied.”

Sebastian Mitchinson, insight manager at HSBC, argues that data quality is a major factor for clients because it comes down to trust. “As clients, we need to trust our supplier partners, research agencies and panel providers alike. Should data quality issues arise, trust across this network of partners could be stretched. Trust can take time to earn but can quickly be lost.

“Data quality is so central to the confidence and credibility of research. Multi-million-dollar decisions must be based on reliable data. Initiatives like the GDQ should help to address the current lack of transparency, such as buyers’ inability to differentiate vendors’ data quality. It should provide a level playing field with a recognised quality standard.”

Counting costs

Arguably the biggest barrier to better data quality is cost. Too often, clients are keen to have high data quality but are unwilling to pay the price difference required. Joanne Pearson, global customer insight director at Jaguar Land Rover, says: “Clients have to be realistic about the cost of getting high-quality sample: using reputable accredited agencies, having the discussion at the procurement stage about sample sources, and investing in technical solutions for identifying and preventing bots and professional or spurious respondents.”

Pearson adds that the onus on costs does not fall solely on end users paying for data quality, but also on agencies using panel providers that maintain high quality standards and that may not necessarily be the cheapest available. “Agencies need to invest in research and resources to continuously improve their choice and use of panel providers, who likewise need to keep their tech solutions one step ahead of those trying to exploit them.”

What can clients do?

Clients can use their considerable leverage in terms of who they hire to carry out research to force behaviour change where necessary. The Financial Times has made a deliberate decision to stop working with some panel providers over the issue of data quality, based on its own checks.

Jessica McCarthy, senior researcher at the publisher, explains: “The poor data quality made it increasingly difficult to have confidence in the resulting insights, which we ultimately feared would result in misguided recommendations to the business, as well as a negative impact on our team’s reputation internally.”

Some of the tactics the news organisation uses to check research data and survey responses include sense-checking data against other sources and internal benchmarks, building tools or AI to flag poor responses, using trick questions or ‘red herrings’ in surveys, and comparing data with internal knowledge of the market being researched, such as statistical significance thresholds and total population.
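To illustrate how flags like these can be automated, here is a minimal sketch in Python, assuming a simple tabular export of survey responses. The column names, red-herring answer and thresholds are hypothetical illustrations, not details of the FT’s actual tooling.

```python
# Hedged sketch: automated response-quality flags of the kind described
# above (red-herring checks, speeders, straight-liners). All column names
# and thresholds are illustrative assumptions.
import pandas as pd

def flag_poor_responses(
    df: pd.DataFrame,
    red_herring_col: str = "q_red_herring",      # hypothetical trick question
    expected_answer: str = "none_of_the_above",  # the only honest answer
    duration_col: str = "duration_secs",
    grid_cols: tuple = ("q1", "q2", "q3", "q4", "q5"),
    min_duration: float = 120.0,                 # assumed plausible minimum
) -> pd.DataFrame:
    """Add boolean quality flags per respondent and return a copy."""
    out = df.copy()
    # Red herring: respondent endorsed a fake option instead of the honest one.
    out["flag_red_herring"] = out[red_herring_col] != expected_answer
    # Speeder: completed the survey implausibly fast.
    out["flag_speeder"] = out[duration_col] < min_duration
    # Straight-liner: gave an identical answer across a whole rating grid.
    out["flag_straightline"] = out[list(grid_cols)].nunique(axis=1) == 1
    # Any single flag marks the response for manual review.
    out["flag_any"] = out[
        ["flag_red_herring", "flag_speeder", "flag_straightline"]
    ].any(axis=1)
    return out
```

In practice, flagged responses are usually reviewed rather than dropped automatically, since a genuine respondent can occasionally trip a single check.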

McCarthy says that The Financial Times probes quality assurance measures in place with vendors and sometimes asks to add its own, as well as, where feasible, seeking to carry out its own audits, such as observing CATI interviews. “Seeing resistance to these requests is a red flag to us and implies the vendor is not able to be transparent with us,” she says.

"Clients have to be realistic about the cost of getting high-quality sample"

McCarthy adds: “We can lean on examples of past experiences where (usually) cheaper providers have led to poor quality data. Ultimately, if a project is deemed worthwhile, it warrants investment in reliable, high-quality data. The alternative is to see what can be done without panel data, using proxy metrics.”

Sarah De Caux, lead analytics and insight manager at Co-op, says the company only works with a small number of trusted suppliers that have undergone a rigorous procurement process. “We do ask them about their data quality processes, but all the flags around third-party panels in the industry press recently have really put this back on my radar,” De Caux says. “So, in common with everyone else, I am sure, our main concerns now are around AI and bots and data integrity.

“We’re fortunate as a member-owned organisation we can consult with our members directly, fulfilling a lot of our research requirements – that means we can be confident of data integrity. But once we start dipping into panels, I have significant concerns about how sound their data really is. It’s raised major concerns for us as it’s so foundational to any project – if you don’t trust that, everything else ebbs away. When we have been using panels recently, we’ve been asking agencies how they are quality checking their data, sussing out bots, what assurances they are asking for from panels they use, which ones they are using, and how they can give us confidence in the data they are providing.”

In addition, Co-op asks other questions around data checks and accuracy, often looking for accredited agencies and those with formal quality standards in place, according to De Caux.

Pearson says the industry could do more to address data quality, including accrediting panel providers, rating panel providers, paying only on results after checking rather than on completion, lobbying for fraud prosecutions for click farms, and panel providers checking respondent credibility, such as verifying their bank details.

Discussing Jaguar Land Rover’s approach to the issue in its own research, Pearson says: “We specify the sample criteria for our studies, and for many pieces of research we require proof of ownership of the vehicle that participants say they own and are being recruited to discuss.”

Procter & Gamble has strengthened its approach to data quality recently, requiring all online quantitative sample suppliers to be ISO 20252 certified by July 2025. ISO 20252 provides a framework for companies to meet quality management standards across research, covering all stages of a research project. The standard therefore acts as a baseline guarantee that Procter & Gamble’s expectations and requirements of sample providers are being met, with certification requiring annual audits to retain.

Alexandrine de Montera, chief product officer at Full Circle Research and chair of the Certification Institute for Research Quality, a subgroup of the Insights Association providing audit and certification services, says that the requirement for ISO 20252 is positive. “I think it’s a good thing for the industry – everyone can say they are doing the right thing,” she says. “We have heard of other brands that want to follow in P&G’s footsteps.”

However, ISO accreditation is not a silver bullet. “You are also eliminating smaller players,” argues Ariane Claire, panel and research director at business-to-business sample company MyClearOpinion Insights Hub. “ISO certification is a time-consuming, costly process that not every company can invest the time and effort in if that’s going to be the baseline. People need to have conversations around how the respondents are recruited and how the panel is being actively managed.”

Claire says that a bigger issue is that many sample requests are often focused on an arbitrary number of respondents, rather than on the research processes being carried out. “People think more is better, and that’s not necessarily the case. You don’t necessarily need 5,000 completes to tell you something is working. I know that number sounds fantastic, but if you’re segmenting outright and you’re doing the research in the right way, on a consumer level a few hundred could potentially suffice. On the B2B side, we could drill it down to 30 or even five qualitative interviews that would get you what you need. We somehow lost the vision of what makes good research, and in that, quality suffered.”
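Claire’s point about diminishing returns can be illustrated with a standard margin-of-error calculation (not from the article): at 95% confidence and the worst-case proportion of 0.5, quadrupling a sample only halves the margin of error.

```python
# Back-of-the-envelope margin of error for a simple random sample,
# at 95% confidence (z = 1.96) and worst-case proportion p = 0.5.
# Illustrative only; real panel samples are rarely simple random samples.
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    return z * math.sqrt(p * (1 - p) / n)

for n in (100, 400, 1000, 5000):
    print(f"n={n:>5}: ±{margin_of_error(n):.1%}")
# n=  100: ±9.8%
# n=  400: ±4.9%
# n= 1000: ±3.1%
# n= 5000: ±1.4%
```

Going from 400 to 5,000 completes shrinks the margin from roughly ±5 points to ±1.4, a gain that often matters less than whether those completes are genuine, which is Claire’s underlying argument.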

Who should take charge?

Who is responsible for data quality? Is it solely down to agencies, panel providers or end users to drive through the necessary improvements? Or is it some combination of the three, supported by industry bodies? And how can industry standards make sure that research within brands’ insight functions is of sufficient quality?

Alex Owens, until recently head of the People Data Centre at Unilever, says users of data should seek to properly understand what they are working with, the limitations of a data set and what quality means in the context of the data provided. He says: “Defining quality according to the data set you are working with would then allow you to have the right expectations of the agency that is delivering that data to you. Expect panel data to have a bias, either because of the people you’re speaking to or because of the way you have weighted the data. As long as it is consistently inconsistent, it’s therefore useful as a relative measure.”

Owens adds that he has several points for tackling data quality. “One, accept every data set has bias. Two, understand the bias within that data. Three, understand the hypotheses you are looking to validate through the data. Four, never use just one data set to tell your story. And five, wake up to the realisation that a smaller percentage of the population is willing to participate in more traditional research, therefore accept and understand the limitations, but also the fact that ‘digital’ data can be far more robust and better reflect the population. We need to move away from the belief that setting quotas and/or weighting traditional data makes it more representative – can a sample of 200 truly be?”

De Montera says that it is everyone’s duty, including clients, to pursue better data quality, but argues that industry standards will have the biggest impact. “Everyone is responsible for their part,” she says. “But the associations and the industry are responsible to define what needs to be done at each step – recruitment, pre-survey, during survey and post-survey – so it becomes a standard.”

McCarthy thinks transparency is the best route forward. “I believe there’s an urgent need for greater transparency from all panel providers, both large and specialist,” she says. “Simultaneously, clients need to be more proactive in these discussions and more rigorously investigate the methodologies behind sample construction – perhaps even developing their own quality assurance frameworks.”

“All research and insight professionals have a role to play,” argues Mitchinson. “It starts with the agencies and continues with their own suppliers such as panels. But buyers create the demand, and so clients such as HSBC certainly have a role to play. Agencies could argue that this standard setting may raise costs for buyers, something which may become easier to justify if additional value can be clearly demonstrated.”

Is there a case for optimism? Can the industry and clients work together to address data quality issues, and mitigate their longer-term impact? “We have the existing toolkit to address this issue such as independent regulation, professional standards, research buyer guides, accreditation schemes and training, among others,” says Alcantara. “However, we need to be more intentional in providing assurance, enabled by industry coming together in creating robust safeguards and raising the standard of practice in data and insight.”

Data quality, it seems, is likely to remain on the agenda for some time to come. But with clients increasingly aware of any issues, solutions are starting to emerge. The hard bit is in the implementation. 
