
What happens when surveys become conversational?

A new independent study from the University of Mannheim compares AI-moderated interviews (AIMIs) with a static online survey to examine how interaction design affects depth of insight.

For decades, online surveys have been the backbone of market and social research. They scale well, are easy to standardise, and produce results quickly. That efficiency has made them essential, but it comes with a recognised limitation: the format often constrains the depth of insight that can be obtained.

This limitation extends beyond open-ended questions alone. Even when surveys include free-text responses and follow-up probes, the interaction remains static: respondents are asked to select, rate, or briefly explain, but are rarely encouraged to reflect, connect ideas, or articulate their reasoning within a broader context. Surveys often capture what people think but struggle to reveal why they think it or how different considerations relate to one another.

An independent study at the University of Mannheim tests whether AI-moderated interviews (AIMIs) can enhance the depth and reliability of insights compared with static online surveys.

Comparing a static online survey and AIMIs

The study, led by Aylin Idrizovski at the University of Mannheim, compares:

  • A static online survey run on SoSci Survey; and

  • An AI-moderated interview (AIMI) run on Glaut’s platform (restricted to text-only responses for comparability).

The study used a between-subjects design. Participants were recruited via PureSpectrum and randomly assigned, yielding two groups of n = 100. The sample was limited to US citizens aged 18-55, with an equal gender distribution.

Both groups completed the same questionnaire on healthy lifestyle choices. It included six open-ended questions, each supported by follow-up prompts, along with structured items and a participant experience scale. The key difference was the interaction: the static survey used predefined follow-up questions, whereas the AIMI generated them dynamically from the participant's most recent response. Because the AIMI can request more than two follow-ups, the analysis was restricted to the first two follow-ups in both formats to keep the comparison fair.
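To make that distinction concrete, here is a minimal sketch of the two follow-up strategies in Python. It is illustrative only, not Glaut's implementation: `ask_model` stands in for whatever language model the platform actually uses, and the prompt wording is an assumption.

```python
from typing import Callable

STATIC_FOLLOWUPS = [
    "Can you tell us more about that?",
    "Why is that important to you?",
]

def static_followup(turn: int) -> str:
    """A static survey asks the same predefined probe regardless of the answer."""
    return STATIC_FOLLOWUPS[turn]

def dynamic_followup(ask_model: Callable[[str], str], last_answer: str) -> str:
    """An AIMI-style probe is generated from the participant's most recent response."""
    prompt = (
        "You are moderating an interview on healthy lifestyle choices. "
        f'The participant just said: "{last_answer}". '
        "Ask one short, neutral follow-up question about their reasoning."
    )
    return ask_model(prompt)

# Stub model so the sketch runs without any external service.
demo_model = lambda prompt: "What makes it hard to keep that habit on busy days?"

print(static_followup(0))
print(dynamic_followup(demo_model, "I try to cook at home most evenings."))
```

The design point is simply that the static probe ignores the answer, while the dynamic probe is conditioned on it.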

What changed in the insights

Across linguistic and thematic measures, AIMIs produced richer responses.

  • Response length increased. On average, AIMI participants wrote about 131 words, compared with 94 in the static survey, a 39% increase.

  • Vocabulary became more varied. AIMI responses included about 51% more unique words than the static survey, and lexical diversity was also higher. That combination suggests participants were not only writing more, but expressing ideas in a less repetitive way.

  • Thematic breadth increased. The study distinguishes between the number of theme mentions and the number of distinct themes: total theme mentions were similar across conditions, but AIMI responses covered more unique themes. In practice, that means a broader spread of ideas within a response, rather than more repetition of the same points.

These measures map back to insight depth: more elaboration, broader concept coverage, and more differentiated expression. Importantly, readability and content-word share did not differ meaningfully across formats, suggesting richer responses did not become harder to interpret.
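For readers who want to see how such measures can be computed, below is a minimal sketch. It assumes crude regex tokenisation and uses the type-token ratio as the lexical diversity measure; the study's exact operationalisation may differ, and `theme_counts`, with its hand-assigned theme codes, is hypothetical.

```python
import re
from collections import Counter

def text_metrics(response: str) -> dict:
    tokens = re.findall(r"[a-z']+", response.lower())  # crude word tokens
    unique = set(tokens)
    return {
        "length_words": len(tokens),                   # response length
        "unique_words": len(unique),                   # vocabulary size
        # Type-token ratio as a simple lexical diversity proxy.
        "lexical_diversity": len(unique) / len(tokens) if tokens else 0.0,
    }

def theme_counts(coded_themes: list[str]) -> dict:
    """Separate total theme mentions from distinct themes, as the study does.
    `coded_themes` is a hypothetical list of codes assigned to one response."""
    mentions = Counter(coded_themes)
    return {"theme_mentions": sum(mentions.values()), "unique_themes": len(mentions)}

print(text_metrics("I walk daily because walking daily clears my head."))
print(theme_counts(["exercise", "mental_health", "exercise"]))
```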

Data quality held up better

The study also assessed response validity using a “gibberish rate”, defined as entries containing random characters, meaningless repetitions, or non-informative fragments.

In this dataset, gibberish responses appeared in the static online survey (approximately 10%) but were not present in the AIMI dataset used for analysis. This matters because low-quality open-ended responses increase cleaning time and lower confidence in qualitative signals.
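As an illustration only, one way to operationalise that definition is to flag one-word fragments, heavy repetition, and vowel-free keyboard strings, as in the sketch below. The thresholds are assumptions for demonstration, not the paper's coding rules.

```python
import re

def looks_like_gibberish(response: str) -> bool:
    words = re.findall(r"[a-z]+", response.strip().lower())
    if len(words) < 2:                       # non-informative fragment
        return True
    if len(set(words)) / len(words) < 0.34:  # meaningless repetition of the same word(s)
        return True
    vowelless = [w for w in words if not re.search(r"[aeiouy]", w)]
    if len(vowelless) / len(words) > 0.5:    # random keyboard strings rarely contain vowels
        return True
    return False

responses = ["sdfk qwrt zxcv", "yes yes yes yes yes yes", "I try to sleep eight hours."]
gibberish_rate = sum(map(looks_like_gibberish, responses)) / len(responses)
print(f"gibberish rate: {gibberish_rate:.0%}")  # 67% in this toy set
```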

Participants did not report higher effort

AIMIs also scored higher on participant experience. The overall experience score was higher for AIMI than for the static survey, and item-level results consistently showed that AIMIs were rated as more conversational, less repetitive, and better at making participants feel understood and confident about how their data was handled.

At the same time, ease of expression and comfort remained similar across formats. The change in insight depth did not result from making participation more difficult. Instead, it came from altering the interaction structure.

Why this matters for survey research

Static surveys excel at measurement and standardisation. However, they can struggle when research demands explanation, context, and connected reasoning. The Mannheim evidence indicates that dynamic, answer-aware follow-ups can elicit deeper and broader responses, even when the underlying questionnaire stays the same.

That reframes a long-standing trade-off: depth isn't determined by question wording alone; it's also shaped by interaction design.

Read the full paper here for the complete study design, methodology, statistical testing, and results.

Veronica Valli is head of research at Glaut Research
