FEATURE | 23 January 2023

Generative AI: Threat or useful addition to the research toolkit?


Since the launch of ChatGPT late last year, the past few weeks have seen a flurry of interest in generative artificial intelligence (AI) as a means of creating content, with potentially far-reaching consequences for market research. Research Live explores the technology and its possible implications. 


What is generative AI? 

Generative AI models, or large language and image AI models, can produce text and images – from recipes and high school essays to content for businesses – using “complex machine learning models to predict the next word based on previous word sequences”, according to Harvard Business Review.  

Alexandra Kuzmina, innovation consultant at MMR Research, says: “Generative AI’s advanced linguistic capabilities enable unprecedented levels of personalisation, fundamentally transforming consumer expectations from any interactions with brands.”

What is ChatGPT?

San Francisco-based OpenAI, co-founded in 2015 by Tesla and Twitter chief executive Elon Musk (who stepped down from its board in 2018), launched ChatGPT in November 2022. The chatbot uses a dialogue format that allows it to “answer follow-up questions, admit its mistakes, challenge incorrect premises and reject inappropriate requests”, according to the OpenAI website. 

What are the implications of generative AI for the research industry? 

The technology is set to be a game-changer for market research, says Kuzmina, by “supercharging existing methods by giving researchers the ability to ask questions and analyse responses in a more human-like and natural way at scale.”

Kuzmina adds: “While it’s not a magic bullet and there are risks to consider, we are likely to see a decade of innovation built on advancements in generative AI and research will enjoy a fair share of them.”

Phil Sutcliffe, managing partner at Nexxt Intelligence, agrees that generative AI is likely to have a substantial impact on the insights business.

“The opportunities for market research are immense: generative AI enables researchers to have deep conversations with consumers at scale through automated platforms, and not just in traditional form-based surveys but in many modalities, from chat based surveys to emerging possibilities such as virtual worlds with AI generated stimuli,” Sutcliffe says.

Generative AI will also help researchers to “analyse qualitative and open-ended data at scale in new ways”, notes Sutcliffe – including transcribing interviews and classifying data according to any codeframe. 
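The codeframe classification Sutcliffe mentions can be sketched in a few lines of Python. The function below only builds the prompt a researcher might send to a generative model; the codeframe, prompt wording and function name are illustrative assumptions, not a description of any vendor's actual implementation:

```python
# Illustrative sketch: wrapping a verbatim response and a codeframe
# into a classification prompt for a generative model. The wording
# and codes here are hypothetical examples.

def build_coding_prompt(response, codeframe):
    """Ask a model to assign a verbatim to the best-fitting code(s)."""
    labels = "\n".join(f"- {code}" for code in codeframe)
    return (
        "Classify the survey response below into the most appropriate "
        f"code(s) from this codeframe:\n{labels}\n\n"
        f"Response: {response}\nCodes:"
    )
```

The returned string would be sent to the model of choice, with the model's answer mapped back onto the codeframe for tabulation.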

Jon Puleston, head of research innovation at Kantar Profiles, says ChatGPT can write in a “very clear, neutral way” – useful for composing survey questions – and predicts researchers will all use it as a tool to help write reports and proposals more efficiently.

He says: “I have already used it to sub-edit some blog posts, but I think its real strength for market researchers is its capability to analyse and interpret large volumes of research feedback. I uploaded a batch of answers from a survey where we asked what people think governments should do to tackle the cost of living crisis. It provided a 10-bullet point, really quite frighteningly accurate summary in about 10 seconds – a job that would have taken me a couple of hours.”
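The batch summarisation Puleston describes can be sketched as a simple pre-processing step: verbatim answers are chunked and each chunk wrapped in a summarisation prompt. The batch size, prompt wording and function name below are illustrative assumptions rather than his actual setup:

```python
# Hypothetical sketch: batching open-ended survey answers into
# prompts for an LLM summariser. Batch size and wording are
# illustrative assumptions.

def build_summary_prompts(answers, batch_size=50, bullets=10):
    """Chunk verbatim answers and wrap each chunk in a summarisation prompt."""
    prompts = []
    for start in range(0, len(answers), batch_size):
        chunk = answers[start:start + batch_size]
        numbered = "\n".join(f"{i + 1}. {a}" for i, a in enumerate(chunk))
        prompts.append(
            f"Summarise the following survey responses as {bullets} "
            f"bullet points, preserving the range of opinions:\n\n{numbered}"
        )
    return prompts
```

Each prompt would then be sent to a generative model, with the researcher reviewing the returned bullets before they reach a report.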

Karine Pepin, vice-president at 2CV, who has also been experimenting with the technology, says: “The most significant impact of ChatGPT has been sparking people’s creativity and generating loads of innovative ideas for addressing pain points across the business from survey design to data quality, coding open-end responses, summarising focus group discussions, and so on.”

As with any new tool or technology, there are necessary questions over how much focus researchers should be placing on it – is this a flash in the pan, or more of a systemic shift? Could it even offer an advantage over competitors? 

Richard Bowman, director at This is Insight, and David Boyle, director at Audience Strategies, who have co-authored a book on the subject, Prompt, think so. “Adopting ChatGPT technology can provide a competitive advantage for market researchers. By utilising its capabilities, researchers will gain a deeper understanding of audience needs, behaviours and feedback more effectively and efficiently. This can enable them to identify new opportunities across a wider range of categories for a broader range of clients, and communicate their findings more effectively,” they say. 

This could include obtaining previously hidden insights from a wider range of sources, enabling more informed decisions and potentially helping researchers to identify trends sooner.

Concerns and risks of generative AI

However, researchers have also expressed concerns over the use of a technology that could exacerbate the already-pressing issue of false information, as well as the possibility of producing content that is just plain inaccurate. 

“My list of concerns is quite long,” says Puleston. “How these tools are already being used for non-ethical purposes is my biggest immediate concern; how they will proliferate the already massive pool of fake information that circulates around the web; what might happen when states start to want to control what these tools say or large unprincipled media moguls get a hold of them to promote political points of view are some of my longer term fears.”

Puleston also notes that much of the content created by generative AI “could be viewed as stealing”. He says: “If a musician repurposes a single riff from another artist, they are liable for millions for breaking copyright law. If I create a machine that copies every riff ever made and repurposes them in a way you cannot tell where I had taken it from, I am above the law of copyright and I have a business worth billions.”

Despite “significant concerns at the societal level”, Sutcliffe feels the risks for research can be mitigated by practitioners ensuring they use the tool carefully.

He explains: “In terms of market research, I’m less concerned – as long as we use generative AI wisely. There are risks, for example, I wouldn’t suggest anyone uses ChatGPT for desk research, there is no citation of sources and information provided can seem plausible but be very wrong. Another concern is bias, for example, with aggregate summarisation of verbatim data there are risks such as the AI coalescing to the mean and not representing the full diversity of opinions expressed. However, this can be overcome by putting the human in the loop, for example by using AI to summarise and then providing tools so that the researcher can review and amend the summary.”
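The human-in-the-loop safeguard Sutcliffe describes can be sketched as a simple workflow: each AI-generated bullet stays in draft until a researcher has reviewed and, if necessary, amended it. The class and field names below are hypothetical illustrations, not any platform's actual design:

```python
# Minimal human-in-the-loop sketch: AI-produced summary points are
# held as drafts until a researcher signs each one off, optionally
# amending the wording. Names are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class SummaryPoint:
    ai_text: str              # bullet produced by the model
    reviewed: bool = False    # has a researcher signed it off?
    final_text: str = ""      # researcher-approved (possibly amended) wording

    def approve(self, amended_text=None):
        """Accept the AI wording as-is, or substitute an amended version."""
        self.final_text = amended_text if amended_text is not None else self.ai_text
        self.reviewed = True

def publishable(points):
    """Only release the summary once every point has been reviewed."""
    return all(p.reviewed for p in points)
```

The point of the gate is that a summary which coalesces to the mean never reaches a client unexamined: the researcher can restore minority opinions before sign-off.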

Pepin says: “After experimenting with GPT-3, the AI engine behind ChatGPT, I realised that the quality of the output is heavily dependent on the human who trains the AI. With this in mind, I am worried that if we solely rely on AI for certain tasks like data analysis, we may overlook valuable insights.”

Bowman and Boyle advise caution in applying the technology, saying: “It is important to ensure that AI is used in a way that is transparent and trustworthy. This includes being open and transparent about the data that is being used to train the AI, as well as the algorithms and models used. Additionally, it is important to ensure that AI is being used in a way that respects human rights and values, such as avoiding discrimination and ensuring privacy.”

Returning to the point on the importance of the critical eye in the process, they add: “While generative AI can automate certain tasks and provide new insights, it cannot replace human expertise and creativity. It is important to evaluate the results of AI critically and to involve human experts in the decision-making process.”

The Market Research Society is considering the implications of ChatGPT from a professional standards perspective, and intends to issue a guidance note on the topic as part of a new series of standards documents focused on ethical and legal implications of technology on research practice. 

Jane Frost, chief executive, MRS, says: “We need to embrace technology and make it work for us. Part of this is being aware and honest about what it can do and what its limitations are. The Delphi report on insight alchemy is a good read in this context. Poujadism does not work for any sector which wishes to remain relevant, nor does blind acceptance. As a sector which is curious, adaptive and intelligent we need to own the new, not let it own us.”
