One in seven of the public using AI for health advice

UK – One in seven members of the public have used AI to provide health advice instead of contacting a GP or NHS service, according to research from King’s Health Partners, Responsible AI UK and the Policy Institute at King’s College London.


The research, featured in a report called The use of AI in UK healthcare, also said that 10% of people had used AI for mental health therapy or wellbeing support instead of a trained professional.

However, 20% of those who sought health advice from AI said the technology did not tell them to get a professional opinion, and 21% said they decided against seeking professional healthcare advice because of something the AI told them.

The public are split on whether AI should be used in clinical decision-making, with 37% in favour and 38% opposed; 18 to 24-year-olds are the age group most against its use in these circumstances (49%).

The public also overestimate how widely AI is used by GPs; 8% of GPs use AI in clinical decisions, but the public thought 39% on average used the technology for this purpose.

In addition, 76% felt AI tools used in patient care should be officially approved and regulated, while the top emotion felt by the public about the NHS using AI for clinical tasks was anxiety about safety and accuracy (39%), with negative emotions the most selected (63%).

The findings are based on a nationally representative survey carried out by Focaldata of 2,093 UK adults between 24th and 30th March 2026 via an online panel network.

Amy Clark, senior policy fellow at the Policy Institute at King’s College London, said: “People are already turning to AI chatbots instead of their GP – driven by convenience and stretched NHS capacity – yet the wider public remains anxious about where this is heading.

“What stands out is that women and young people are among the most sceptical, which challenges the assumption that familiarity with new technology creates acceptance.”

Professor Graham Lord, executive director at King’s Health Partners, said: “When something goes wrong with AI, responsibility is often placed on clinicians, even where they have limited control over how AI tools are introduced.

“To realise AI’s potential, we need greater transparency about what works, what is safe, how decisions are made and how issues are handled – so staff and patients can feel confident in its use.”

Professor Sarvapali (Gopal) Ramchurn, chief executive at Responsible AI UK, added: “This latest research adds to the growing evidence that the general public is trusting AI even when they should not, and not only for health-related questions but also for legal, financial and work-related issues.”

Research Live is published by MRS.
