Special Report

Are we losing our research participants?

Participants in research today often feel overburdened, undervalued or simply disengaged. Is there appetite for real change, or will the industry continue to lose its lifeblood? By Ben Bold.

Market research has always relied most heavily on a single resource: people. The individuals who take part in surveys, join focus groups, or share their opinions with researchers are the lifeblood of the industry.

Yet, despite decades of methodological refinement, there’s growing consensus that many participants today feel overburdened, undervalued, or simply disengaged. Attention spans are shrinking, digital experiences are evolving rapidly, and participant expectations are shifting faster than many research practitioners can keep up.

In 2022, Research Live noted increasing anecdotal evidence that participants in market research find the process tiresome and can feel confused or misled about what is expected of them, particularly in online quantitative surveys.

Meanwhile, the latest GRIT report suggested that one of the biggest challenges faced by the industry is “accessing respondents in ways that assure data integrity.”

For Jon Puleston, chief methodologist at Ipsos, falling engagement is far from a new trend. “It’s been worsening since 1964,” he says, noting that engagement suffers every time the industry invents a new form of research.

It is clear that technology has brought efficiency gains that have benefited the industry. But there is a question over whether providers are reinvesting these gains into better respondent experiences – or just extracting margin.

Lewis Reeves, founder and chief executive of Walr, draws an interesting parallel, citing comments made recently by Hollywood star Matt Damon on the ‘Joe Rogan Experience’ podcast about Netflix: the streaming platform understands that its audience at home is often watching while on second screens, which has led to shorter takes that appeal to shortened attention spans.

“In the first five minutes, you have a massive action scene, and you have to repeat the plot three to four times within the opening 45 minutes, just to make sure that people are paying attention,” Reeves says. “For good or bad, that’s the world that we live in.

“Yet in market research, we're often still running surveys exactly the same way as we did 20 years ago.”

 


The engagement challenge faced by the industry is multi-layered. Participants cite survey fatigue, excessive screening questions, repetitive formats and a lack of connection to research purpose as common pain points.

Luisa Gibbons, global head of innovation strategy at MMR, notes that while online survey infrastructure has remained largely static over the past decade, cultural and technological shifts have transformed participant expectations. “People say it’s a Gen Z thing, but the rise of digital experiences and the evolution of social media has changed how people expect to engage with content.”

In short, people’s expectations have outpaced the means of capturing their feedback. While repetition and consistency ensure methodological rigour, they can make surveys feel formulaic.

Reeves observes that even highly experienced participants, who may complete many surveys a year, can disengage after only a short time. “If you're asking an important question after 15 minutes, it’s pretty 50-50 what response you're going to get,” he says.

Similarly, Eddie O’Brien, senior director of global customer insight at Sage, highlights the cognitive load of traditional surveys. He says: “You get a survey and you’ve got 10 screening questions and then a 15-minute survey that is scale-based and choice-based. That’s a big ask of any individual.”

For ‘professional respondents’, engagement can remain high with careful management and incentives, but casual participants can disengage, undermining completion rates and data quality.


“If you're asking an important question after 15 minutes, it’s pretty 50-50 what response you're going to get.”

Lewis Reeves, Walr


Survey fatigue

Justine Yorke, chief operations officer at MMR, also argues that the burden of overly complex screening can discourage engagement from the outset. “When you’ve spent 10 minutes qualifying for a survey, it’s like, ‘How is this worth the effort? I just want to earn enough money to buy a takeaway’,” she says.

Respondent fatigue is closely tied to the perceived value of participation. Participants are acutely aware of how their data is used and want reassurance that their input is meaningful.

Gibbons stresses that such value is not solely financial. “The level of effort required for respondents to give feedback shouldn’t outweigh the perceived reward from taking part,” she says.


“The level of effort required for respondents to give feedback shouldn’t outweigh the perceived reward from taking part.”

Luisa Gibbons, MMR

That reward can be monetary, but also the mental satisfaction of feeling valued. Well-designed surveys that respect participants’ time, allow meaningful responses, and maintain a conversational tone are more likely to sustain engagement and reduce fatigue.

Reeves echoes this sentiment, noting that opaque industry practices can reduce motivation. Without transparency and thoughtful design, research risks commoditisation, diminishing incentives for participants to invest effort and attention.

“It’s critically important for both the respondent and the research that there is absolute clarity on the supply chain,” he says.

“When the respondent clicks on a survey opportunity with a time and incentive, is this where they are going? How quickly will they get paid, etc.? For the researcher it’s important to understand where has the respondent come from, how were they recruited, how will they get incentivised, what validation metrics have they passed, etc.” 

 


Personalisation and human connection

A recurring theme among contributors to this report is the power of personalisation and the human touch. Gibbons highlights that giving participants a sense that their input is being heard can dramatically improve engagement. This could include tailoring the experience more to the participant so that it feels more personalised – “even if just through clever filtering or conversational AI,” says Gibbons.

“Just making the experience feel a bit more reciprocal, that their opinion is being valued, that the responses they're providing are being listened to using cues and the ways in which we can communicate to them throughout the survey; as well as incorporating other tools and giving them options on how they complete and share their feedback.”

While survey fatigue and complex screening persist, the industry is exploring how participant experience can be improved through design and technology.

Puleston emphasises that survey design remains a craft, even in an AI era. “We’re recognising that writing a survey is akin to writing any form of communication; it’s a creative skill that people can learn,” he says. “But it requires training – there’s a lot of experience required to understand how to craft a good survey, how to make surveys feel more conversational in style, to have a good strong narrative. You can build a really good survey around a good question that people want to answer.”

Puleston cautions against over-reliance on large language models (LLMs) for survey creation. “One of the most dangerous things you can do is ask an LLM to create your survey from scratch. It reproduces 50 years of bad survey practice in the language and tone we’ve developed over the years, which is very earnest and can feel like being interrogated by a lawyer.”


“One of the most dangerous things you can do is ask an LLM to create your survey from scratch.”

Jon Puleston, Ipsos

 
But used wisely, AI can support efficiency, helping researchers produce better surveys faster while raising industry standards. Incentives that align effort with reward also improve engagement. “Reward people properly for giving thoughtful answers, and reward the truth. Instead of saying, ‘Here’s £1 for doing a survey,’ you get to the end and say, ‘Oh, you gave us really good answers, here’s a 50p bonus and if you carry on giving good answers, you're going to earn more money.’ The better your answer is, the more money you're going to earn,” Puleston explains.

Cultural shifts and the digital landscape

MMR’s Gibbons and Yorke emphasise that digital experiences and social media have raised expectations for interactivity and personalisation.

“Online surveys haven't evolved much in the last 10 to 15 years,” Gibbons says. “If we're brutally honest, there’s been tweaks around the edges, but a lot of the process, the structure, the consumer experience hasn't really shifted dramatically during that time.”

Reduced attention spans, long surveys, and complex qualification screens exacerbate engagement challenges. MMR advocates for personalisation and conversational design.

Balancing efficiency with insight

AI is often positioned as a tool for efficiency, but Gibbons says that its true value lies in engagement and unlocking participant stories.

“A lot of focus on the benefits of AI have really honed in on efficiency, being able to automate manual tasks, being able to deliver insights quicker and cheaper,” she says. 

“However, where we really see a huge benefit is allowing us to connect with consumers in more engaging ways, in more personalised ways, but also getting closer to the moment of truth – using it as a way to better unlock consumers’ individual stories and harness them at scale so that we are getting more granular, richer, more nuanced understanding of their priorities, their feedback, but in a way that can be delivered efficiently.”

O’Brien posits that, regardless of whether participants know it, their online behaviour is monitored, which in effect means they are being “agent-ified”.

“We’re in a world where most people now are digitally native and spending a lot of their time using things like ChatGPT,” he says. “That tool knows who you are and knows a lot about you.”

This means the need for laborious screening questions can be eliminated, and questions can be asked in the digital ecosystem more easily. O’Brien says: “Rather than you having to spend 20 minutes answering a questionnaire and then going through a long survey, one of the ways we can solve this is moving towards a much more creative, gamified way to collect data and insight.

“By asking short, sharp questions, you can still stitch the data together and collect the data but you can cut the data in different ways, because you'll have one data set.”


“I just wonder, how might the research industry think about reinventing how it collects data?”

Eddie O’Brien, Sage

 
However, Reeves warns against assuming AI can replace thoughtful and nuanced human design. “The reality is that we do adapt, and we do incorporate new things, but we just don't shift that rapidly, for good or for bad. We’ve got to meet the market where it is.”

Data quality and participant experience

Engagement and data quality are inseparable. O’Brien notes the risks of diminished attention in traditional quantitative methods: traditional surveys – long, complex and impersonal – struggle to capture meaningful responses, he says.

He estimates that in some cases, between 30% and 40% of quantitative data may be unreliable. “But big companies are finding that LLMs are at worst 70% accurate and at best 90% accurate relative to real-world data.”

Yorke flags the proliferation of bots and poor-quality online panels. Yorke says: “We need to be mindful of the impact of AI on quality. You need to stay one, two, three, four, five steps ahead to understand what is real and what is not.”


“We need to be mindful of the impact of AI on quality...You need to stay one, two, three, four, five steps ahead to understand what is real and what is not.”

Justine Yorke, MMR

But the industry faces a balancing act: increasing efficiency and reducing friction while safeguarding methodological rigour.

The MRS-backed Global Data Quality’s Benchmarking Project, which collected data between January and June 2025 spanning 51 companies, 78 countries and around 2m records, found that research agencies are reporting fraud removal of 9.4% of respondents pre-survey and in-survey, while suppliers are removing 13.7%.

Participant experience and ethics

All those Research Live spoke to agreed that the participant experience must respect their time and contribution. Complicated screening, repetitive questions and opaque survey purposes erode trust.

Gibbons emphasises that even small improvements, such as conversational language, personalised flows and clear communication of survey purpose, can significantly improve the experience.

The solution lies in blending human oversight with AI and hybrid methods: short, gamified interactions for participants, supported by rigorous methodological checks behind the scenes. 

 


Commoditisation and trust

The participant experience is also affected by the commoditisation of research access. Some feel that the proliferation of panel providers and competitive pressures have driven prices down, sometimes at the expense of quality. 

It’s a point that resonates with Walr’s Reeves. “Indeed, it’s true that the cost of sample has reduced over the years,” he says. “However, there has been a lot of innovation in incentives, which offsets this equation. What gives me a lot of confidence, thanks to our research on 20m completed interviews last year, is that lack of engagement correlates most closely with research design, question type, topic and survey length – significantly more so than incentive. This means we as an industry can make a difference.”

Understanding participant motivations, ensuring data security and clearly communicating research purpose are central to both ethical practice and sustainable engagement.

Reeves, Puleston and O’Brien all stress that these principles are not abstract ideals – they directly affect participant response volume and quality, and therefore the integrity of research, and the research industry.

AI, gamification and keeping humans in the loop

All the interviewees cautioned that technology must complement, not replace, human understanding.

Reeves stresses: “We do adapt, but we don’t shift rapidly enough. Meeting the market where it is requires understanding people first, and technology second.”


“We do adapt, but we don’t shift rapidly enough. Meeting the market where it is requires understanding people first, and technology second.”

Lewis Reeves, Walr


Indeed, there’s a wariness around technology among buyers. Global Research Business Network’s Online Buyers Sentiment Survey (August 2025) examined the opinions of online sample buyers, finding that a third of buyers have no opinion on the impact of AI on sample quality; another third perceive the impact as negative.

What’s required is a synthesis of the traditional and an embrace of the new – a hybrid approach. 

“Sometimes it doesn't look like the industry’s necessarily got its stuff together,” O’Brien notes. “Because on one hand, before AI came along, qual was fighting quant research. For me, I look at qual/quant research, digital analytics and different sources of data to get one single version of the truth. And they draw upon everything.”

Future-proofing the industry

For clients, understanding the limitations of traditional methods and embracing hybrid approaches – including AI, gamification and multi-source data integration – can improve both engagement and insight.

Reeves notes that AI has “drastically” upped the speed at which businesses can bring something brand new to market. “You can have something that is game-changing tomorrow and a different game-changer the next day,” he says.

“Speaking of the benefits when we talk about people, it can do great things for making processes smoother, customising experiences. But when we're in a world where we pay people anonymously online, the way in which we verify them has become more difficult in a world of AI and agents, and so that’s something we do really have to face.”

“Market research is run on people – the great people of the industry, from the people at the MRS to the fabric of our colleagues across the industry.”

And, of course, research participants. Professional research standards, championed by organisations such as the MRS, should provide a framework for safeguarding this resource while embracing technological innovation.


 


Putting participants at the centre

There’s a common thread running through the views of those Research Live spoke to: participants must be treated as partners, not a commodity. Their experience has ramifications for data quality, drives insight, and ultimately defines the value of the research industry. Shorter attention spans, digital proliferation and AI disruption present challenges – but also opportunities for creative, ethical and professionally grounded engagement.

By combining methodological rigour with innovation, the industry can keep humans at the heart of research, ensuring that insights remain robust, reliable and relevant.

While, anecdotally at least, there’s consensus that concerns raised in Research Live’s 2022 article have become more entrenched, its call to action is as relevant as ever: surveys should be designed with empathy, show more respect to respondents, be interesting, inclusive and – above all – engaging.

By prioritising this, the industry can future-proof itself – ensuring that real people, not just data points, remain central to market research.

As Reeves notes: “If Hollywood can evolve using data, we must learn from that. Our participants are the most precious resource, and their experience determines the quality of our insights. If people aren’t willing to participate, we won’t be great. Hollywood has billions and adapts fast – maybe we should, too.”

We hope you enjoyed this article.
Research Live is published by MRS.

