Blurred lines: Rethinking response integrity in the AI era
There was a time when “data quality” in research referred to a fairly straightforward set of concerns: fraud prevention, sampling bias, or inattentive respondents. But we’re now entering a more complicated chapter, one where the biggest challenges aren’t just bots or bad actors, but subtle shifts in how people interact with research itself.
Artificial intelligence is part of that shift. It's not just transforming how research is conducted; it is beginning to shape how people respond. Whether it's a participant using AI tools to enhance their answers, generate responses more quickly, or simulate emotions they believe a survey wants, we're seeing a quiet blurring of the line between real and synthetic input. It's not fraud in the traditional sense, but it does challenge how we define response authenticity, and it's forcing us to ask harder questions about what quality means in this new context.
When responses feel real, but aren’t
We’ve long relied on validation checks, duplication flags, behavioural scoring and attention metrics as proxies for authenticity. But what happens when responses pass those checks and still feel off?
That’s the ambiguity generative AI introduces. A survey answer might be technically sound, aligned with expected norms, and even “human” in tone, yet still lack the depth, spontaneity or intent we seek. Increasingly, respondents are using AI to polish, speed up, or even generate their answers. Some see it as harmless optimisation. Others may not realise they’ve crossed a line. Either way, the integrity of the signal is muddied, and that has real consequences for the insights we deliver.
This presents a challenge for researchers: how do we preserve the signal when its source is less clear than ever?
A broader view of data integrity
It’s tempting to treat this as a narrow problem of detection, a technical gap to be closed by better tools or new validation layers. But in reality, it’s a sign of something deeper: that quality is no longer just about filtering out bad actors, but about understanding the evolving dynamics of human-tech interaction.
People are bringing new behaviours, expectations and tools into the research environment. That’s not inherently negative, but it does require us to revisit some of our foundational assumptions.
We must now think about data quality as a spectrum, not a binary; it is not just whether a response is real or fake, valid or invalid, but how much context and care it carries. Did it come from someone engaged, reflective and thoughtful? Or did it come from an auto-complete, prompted by convenience? To ignore that nuance is to risk missing the very signals that make research meaningful.
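To make the spectrum idea concrete, here is a minimal, purely illustrative sketch in Python. The signal names, weights and thresholds are hypothetical, not any platform's actual scoring; the point is simply that familiar proxies such as attention checks, open-end effort, duplication flags and completion speed can be blended into a graded score rather than a pass/fail verdict.

from dataclasses import dataclass

@dataclass
class Response:
    # Hypothetical per-response signals; real platforms will define their own.
    attention_score: float   # 0-1, from attention checks or trap questions
    open_end_effort: float   # 0-1, e.g. length, specificity, topical relevance
    is_duplicate: bool       # duplication or device-fingerprint flag
    speeding_ratio: float    # completion time relative to the median (1.0 = median)

def quality_score(r: Response) -> float:
    """Blend the signals into a 0-1 'care and context' score (illustrative weights)."""
    if r.is_duplicate:
        return 0.0
    # Responses faster than half the median time get a reduced speed factor.
    speed_factor = min(1.0, r.speeding_ratio / 0.5) if r.speeding_ratio < 0.5 else 1.0
    score = 0.4 * r.attention_score + 0.4 * r.open_end_effort + 0.2 * speed_factor
    return round(score, 2)

# Example: a response that passes attention checks but shows little open-end effort
print(quality_score(Response(attention_score=1.0, open_end_effort=0.2,
                             is_duplicate=False, speeding_ratio=0.4)))  # ~0.64

A response like the one in the example would land in the middle of the scale rather than being waved through, which is exactly the nuance a binary valid/invalid flag throws away.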
Not an argument against innovation
To be clear, this is not a criticism of AI or synthetic methods. Used responsibly, they offer tremendous value, especially in accelerating insight generation, modelling underrepresented segments and expanding what’s possible in early-stage research. In fact, synthetic data may eventually help us detect or account for the kinds of AI-assisted inputs that are emerging now.
Synthetic extensions and modelled responses, when labelled clearly, can help researchers simulate, iterate and refine at scale, but the line between augmentation and distortion is thin. If we fail to distinguish between real and synthetic data, or treat them interchangeably without proper checks, we risk drawing conclusions based on assumptions we haven’t fully interrogated.
That’s why transparency matters. Stakeholders deserve to know how an insight was generated, what inputs informed it and where modelling played a role, not to limit innovation, but to ensure that decisions grounded in data are also grounded in trust.
Redefining the quality mandate
This moment calls for an expanded definition of quality, one that reflects not only accuracy and consistency, but sincerity and signal integrity. That may mean building new frameworks for evaluating engagement. It may mean investing in tooling that can differentiate between effortful human responses and AI-aided shortcuts. Or it may mean rethinking how we design surveys altogether, creating experiences that are compelling enough to invite real reflection, not just fast completion.
It also requires updating how we measure success. Traditional KPIs, like completion rates, cost-per-complete and turnaround time, don’t always reflect the true health of the data underneath. They reward volume and speed, which can be gamed, especially in environments where shortcuts are readily available. A more quality-centred view would incentivise thoughtfulness, not throughput, and elevate the role of attention, comprehension and intentionality in determining what “good data” looks like, as the sketch below suggests.
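As a hedged sketch of what that re-weighting could look like (the field names and figures are invented for illustration, and the quality score is the kind of graded measure sketched earlier), a dashboard might report “effective completes”, each complete weighted by its quality score, alongside the traditional volume metrics.

def quality_weighted_kpis(responses, total_spend):
    """Contrast volume-based KPIs with quality-weighted ones (illustrative only)."""
    n = len(responses)
    effective_completes = sum(r["quality_score"] for r in responses)
    return {
        "raw_completes": n,
        "cost_per_complete": total_spend / n if n else None,
        "effective_completes": round(effective_completes, 1),
        "cost_per_effective_complete": round(total_spend / effective_completes, 2)
            if effective_completes else None,
    }

# Example: 100 completes costing 300 in total; half are low-effort (score 0.3)
batch = [{"quality_score": 0.9}] * 50 + [{"quality_score": 0.3}] * 50
print(quality_weighted_kpis(batch, total_spend=300.0))
# cost_per_complete is 3.00, but cost_per_effective_complete is 5.00

The particular numbers matter less than the pattern: a batch padded with low-effort responses looks far cheaper on cost-per-complete than it really is once quality is priced in.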
Waiting until data is delivered to evaluate quality is already a high-risk strategy. In an AI-influenced environment, it’s nearly untenable. We need to build safeguards upstream, in survey design, platform architecture and sampling protocols, that anticipate these emerging behaviours and help ensure we’re still capturing real human input.
A call for industry-wide dialogue
No single company can solve this alone, and no one should try to. These aren’t competitive questions; they’re foundational ones. If we want the research industry to remain credible, trustworthy, and decision-critical, we need to align on what quality looks like in this next phase.
That means sharing best practices. Being honest about limitations. Asking hard questions, even when the answers aren’t simple. Recognising that, as researchers, our value lies not just in gathering responses, but in understanding the conditions under which they’re meaningful.
This also means being open to new types of collaboration. Academics, data scientists, panel providers and technologists all have a role to play in ensuring that what we call “insight” continues to be rooted in truth. The more we can connect those dots across disciplines, the better equipped we’ll be to navigate the blurry terrain ahead.
James Snyder is vice-president, trust and safety at Cint
