Women’s health issues downplayed by AI gender bias, finds LSE research

UK – The use of large language models (LLMs) in social care may downplay women’s physical and mental health issues, a study from LSE has found.


The research by the London School of Economics’ Care Policy & Evaluation Centre (CPEC), published in BMC Medical Informatics and Decision Making, investigated potential gender bias by evaluating summaries of long-term care records generated by LLMs.

The social care sectors in the US and UK are using LLMs to generate summaries of extensive case notes or audio transcripts of care interventions, the paper noted. 

The study used LLMs, including Google’s Gemma and Meta’s Llama 3, to generate pairs of summaries based on case notes of 617 adult social care users from a London local authority. Each pair of summaries described the same individual, with their genders swapped.
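The paper itself does not publish its code, but the gender-swapping step can be illustrated with a short, purely hypothetical sketch. In the snippet below, the summarise() helper simply stands in for whichever LLM is being evaluated (Gemma, Llama 3, or another model), and the word-level substitution is far cruder than anything a real study would rely on: names and context-dependent pronouns would need proper handling.

```python
# Illustrative sketch only -- not the study's code. Assumes a hypothetical
# summarise() callable that wraps an LLM such as Gemma or Llama 3.
import re

# Simple word-for-word swaps; real case notes would also need names and
# ambiguous pronouns ("her" as object vs possessive) handled properly.
SWAPS = {
    "she": "he", "he": "she",
    "her": "his", "his": "her",
    "woman": "man", "man": "woman",
    "mrs": "mr", "mr": "mrs",
}
PATTERN = re.compile(r"\b(" + "|".join(SWAPS) + r")\b", re.IGNORECASE)

def swap_gender(case_note: str) -> str:
    """Return the same case note with gendered terms swapped."""
    def repl(match: re.Match) -> str:
        word = match.group(0)
        swapped = SWAPS[word.lower()]
        return swapped.capitalize() if word[0].isupper() else swapped
    return PATTERN.sub(repl, case_note)

def summary_pair(case_note: str, summarise) -> tuple[str, str]:
    """Summarise the original note and its gender-swapped counterpart."""
    return summarise(case_note), summarise(swap_gender(case_note))
```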

The study found that Google's Gemma showed the most pronounced gender-based differences of the models tested, consistently producing more negative summaries for men and using more explicit language about men's health conditions than about women's.

For example, the Gemma model frequently described women as managing well ‘despite’ their impairments, with the word ‘despite’ appearing significantly more for women, while the terms ‘disabled’ and ‘unable’ were used significantly more for men than for women.
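How such differences might be quantified can be sketched in a similarly hypothetical way: count how often marker terms such as 'despite', 'disabled' and 'unable' appear across the female-gendered and male-gendered summaries, then compare the totals. The study reports statistically significant differences, which raw counts alone would not establish; the sketch below only shows the counting step.

```python
# Illustrative sketch only -- counts marker terms across two sets of summaries.
from collections import Counter
import re

MARKER_TERMS = {"despite", "disabled", "unable"}

def marker_counts(summaries: list[str]) -> Counter:
    """Count occurrences of each marker term across a list of summaries."""
    counts: Counter = Counter()
    for text in summaries:
        words = re.findall(r"[a-z]+", text.lower())
        counts.update(w for w in words if w in MARKER_TERMS)
    return counts

# Hypothetical usage with the paired summaries from the sketch above:
# female_counts = marker_counts(female_summaries)
# male_counts = marker_counts(male_summaries)
# for term in sorted(MARKER_TERMS):
#     print(term, "female:", female_counts[term], "male:", male_counts[term])
```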

The research was funded by the National Institute for Health and Care Research.

Dr Sam Rickman, lead author of the report and a researcher in CPEC, said: “If social workers are relying on biased AI-generated summaries that systematically downplay women’s health needs, they may assess otherwise identical cases differently based on gender rather than actual need. Since access to social care is determined by perceived need, this could result in unequal care provision for women.

“Large language models are already being used in the public sector, but their use must not come at the expense of fairness. While my research highlights issues with one model, more are being deployed all the time, making it essential that all AI systems are transparent, rigorously tested for bias and subject to robust legal oversight.”

