Where human intelligence should start

AI is changing how we work, but not always how we think. The next wave of research progress isn’t about replacing human intelligence with machines; it’s about learning where human intelligence should start.
That belief has shaped much of my year. In early 2025, when talk about AI in research was at full volume, I wanted to see if it could genuinely help us do better qualitative work – and what that might reveal about how we think as researchers.
Together with Bath-based tech company Rocketmakers, I ran a small experiment. We recreated a study I’d previously carried out for a major arts organisation, exploring how teachers discover and use creative learning resources. This time we used YourRoom – an interactive space where four synthetic “participants” appear as avatars and respond in real time. Each was powered by a different large language model, giving every voice a slightly different tone.
What made YourRoom interesting wasn’t just the avatars, but the group dynamic. They reacted to one another, agreeing, disagreeing, building on ideas – much like a live focus group. Themes formed as the conversation unfolded. That interplay showed how AI can mirror the way insight develops collectively, not just in isolated answers.
For 45 minutes, I moderated the session exactly as I would a human group, just without worrying about providing sandwiches or participants checking their phones, and then compared its output with six teacher interviews from the original project. The aim wasn’t to see if it was faster or cheaper; I wanted to see how close AI could get to the real thing and, in doing so, what it might show us about our own way of working as researchers.
The AI group was quick to surface the functional realities of teaching: lack of time, limited budgets and the constant pressure to adapt materials. What it missed were the textures that make those realities human: the humour, the small flashes of pride and the quiet frustration of people trying to make things work.
In the real interviews, teachers talked about gluing things together at home or repurposing old CDs for science experiments – whatever it took to make lessons engaging. The AI spoke instead of “collaborative networks” and “resource optimisation”. Accurate, but lifeless, omitting the energy, dedication and small sacrifices teachers make every week, often spending their own money on resources.
We all know AI lacks nuance; this showed why that matters. Nuance isn’t decoration – it’s direction. Without it, research can map the ‘what’ but miss the ‘why’. I saw someone online argue recently that, given the cost of real research, most clients would happily “stuff your nuance”. Maybe so, but that’s like saying you’re fine navigating with a map that’s missing all the roads.
Nuance matters. If you’re designing resources or policy for teachers, it’s not enough to know they’re struggling – you need to understand how, why and the lengths they already go to. Those details help build better solutions. Without them, we risk creating elegant answers to the wrong problems.
When we act on sterile insight, we design around caricatures of people rather than their realities. We build things that make rational sense but miss the emotional truth of what’s needed. That’s how good ideas quietly fail – not because the data was wrong, but because it wasn’t alive.
One finding caught me completely off guard. We built two versions of the AI group: one packed with detailed teacher profiles, and another stripped back to the bare minimum. The lighter version felt more natural. It produced freer, more spontaneous exchanges, as if the less we told it, the more human it became. It reminded me how good moderation actually works: give just enough structure, then stand back.
Trying to script authenticity kills it. Teaching AI to sound human ended up reminding me what makes human moderation work in the first place: the ability to listen beyond the words and follow where the energy in a conversation naturally wants to go.
That leads to the bigger truth. AI can analyse and articulate, but only humans interpret. The gap looks small on paper but it’s vast in practice. It’s the difference between describing behaviour and understanding it, between hearing words and hearing people. Interpretation – where empathy, judgement and imagination meet – is still the territory machines can’t touch.
After comparing both sets of findings, I stopped thinking about replacement and started thinking about sequence. AI is brilliant at mapping the landscape, spotting patterns, surfacing themes and sketching hypotheses. Human research is the expedition – exploring emotion, contradiction and surprise.
That’s why I’ve started thinking of AI as more of a scout than an explorer. It can go ahead, get the lay of the land, point out where the interesting hills might be, but it can’t climb them for you. Used that way, it doesn’t take anything away from the human part of research; it just helps you start from a better place.
We can use AI without losing authenticity. In fact, authenticity can be the output of using it well. When we bring it in deliberately – to sharpen hypotheses and free up our attention for the nuanced parts – the work often feels more authentic, not less. It’s not about faster answers; it’s about smarter starting points.
That’s the trap, I think: because AI is quick and gets you 80% of the way, it’s tempting to stop there. But that final 20% – the messy, human part – is usually where the real insight lives. If all we want is a neat summary, then yes, AI’s good enough. If we want understanding, it isn’t.
Ten months on, the hype has cooled and the opportunity is clearer. AI and human intelligence aren’t competing forces; they’re complementary tools in the same craft. The skill of the modern researcher lies in knowing, like a jazz musician, when to let the band take the lead and when to step forward for the solo.
The real challenge for our industry isn’t learning how to use AI, it’s remembering what not to hand over. The listening, the empathy, the messy last 20%: that’s the part that still defines good research, and it’s the part worth defending.
Tom O’Dwyer is founder at Wavelength Research