Are we really turkeys voting for Christmas?

When Simpson Carpenter launched its humanoid avatar moderator, Quasai, earlier this year, the reaction was swift. The goal was to create new value and expand research capabilities, but some commenters saw it as an act of self-sabotage. We were, one wrote, “turkeys voting for Christmas.”
This existential fear is understandable, especially when headlines keep attributing corporate job cuts to AI. But the “turkey” argument does researchers a gross disservice, presuming we are mere data collectors rather than critical sense-makers. It also overlooks the most important people in any market research study: the participants.
Change is scary, and progress uncomfortable. But people forecast the death of face-to-face when telephone interviews arrived, and online panels were predicted to automate us all out of a job. Using AI for data collection is inevitable, because its efficiencies are unparalleled.
The critics do have a point: reckless adoption is a death sentence, and it is dangerous simply to hand the driving seat to the algorithms. That’s why, when building the AI moderator, we didn’t put efficiency first. We have a reputation to protect. We established four governing principles:
1. Redefine, rather than replace, the researcher
Firstly, we must accept that AI does change the job description. AI agents can automate processes, but they cannot yet replicate human vision or instinct. Our value shifts upwards, from data gathering to insight architecture and engineering. We must judge what matters, shape hypotheses and act as the arbiters of rigour, even when the goalposts shift.
As possibly the last cohort to manage an all-human workforce, we have a responsibility to the next generation to teach director-level skills as early as possible, because tomorrow’s researchers won’t just be running projects: they will be orchestrating a suite of human and AI tools to meet strategic goals.
2. Use AI with intention
AI cannot be a cheap and cheerful shortcut to the same ends; it should be deployed selectively. We look for gaps where scale, scope, cost or timing constraints would have rendered a project unfeasible – like running more than 500 twenty-minute deep-dives in five languages with a consistent moderation style. If AI allows us to answer questions we previously couldn’t afford to ask, it is an innovation. If it just churns out generic data for less money, it is a race to the bottom.
We developed a hyperreal video avatar, for example, not for hype but because faces command attention. We are transparent upfront that the moderator is AI, and paradoxically this transparency, combined with our psychological tendency to trust what looks like us, creates a space where people don’t feel judged for their opinions. Every design choice must serve a clear research purpose like this, with integrity as a core tenet.
3. Adopt a developer mindset
We need to reassess our predilection for perfection. Researchers are trained to be risk-averse and precise, but the pace of tech development means we cannot wait for a finished product. With new models released in quick succession, there is no finish line. We need to get comfortable with iteration and plan for the constant updating of our tools as foundation models improve.
4. Prioritise the participant experience
Finally, we need to meet people where they are. Our industry has long battled declining data quality, attention spans and response rates. We need to design research that reflects modern culture and work to include underrepresented or hard-to-reach groups – whether that’s night-shift workers or the growing number of neurodivergent people who find interacting with AI easier than interacting with a human interviewer.
Studies show people are often more willing to be emotionally vulnerable with AI. An analysis of 100tn tokens by OpenRouter found that 52% of all open-model usage is roleplay [1]. Similarly, a National Bureau of Economic Research paper showed that ‘self-expression’, including chitchat, personal reflection and roleplay, is part of how people use AI tools for non-work purposes [2].
Our own studies support this. Of almost 900 participants, 83% said they felt more comfortable sharing their opinions with our avatar than with a human moderator. For topics where controversial views might shape decisions, an impartial AI moderator can be a gateway to uncomfortable truths. It meets people where they are: on a screen, anonymous and in control.
Does that make us turkeys voting for Christmas? We don’t build tools to make ourselves obsolete, nor to jump on the latest bandwagon. We are building tools to ensure research thrives by pushing our craft into new territories. And for all this talk of tech replacing us, here we are, creating new ways to harness it to our advantage. After all, Christmas is coming anyway – surely the greatest risk is making like ostriches and pretending otherwise.
Pui-Tien Man is head of growth and innovation at Simpson Carpenter
References: