FEATURE 12 February 2024
A question of bias
Research has identified biases against minority groups in certain financial products. Phoebe Ward and Carol McNaughton Nicholls examine how AI could exacerbate the issue.
Debates about how new technology impacts our lives are not new. However, the pace of this debate has changed recently, fuelled by increased access to generative artificial intelligence (AI) tools; technologies that raise fundamental questions about knowledge-making, decision-making, and what it means to be human.
As insight practitioners, we think about these challenges constantly. We need to understand how insight and evidence can help society navigate new challenges. We know how important a rich evidence base can be in cutting through complex landscapes. Crucially, evidence allows us to have informed discussion and truly examine who is impacted by change.
New insights on complex topics can trigger the need for more research and debate. In 2022, Citizens Advice explored how personal data and algorithms could be leading to discriminatory pricing for people of colour buying car insurance. The research shone a spotlight on the use of new practices in financial services specifically, and raised the question: are similar trends happening elsewhere?
The findings from the Citizens Advice research, in the report Discriminatory pricing: Exploring the ‘ethnicity penalty’ in the insurance market, raised important considerations for the Financial Services Consumer Panel, an independent statutory body representing the interests of consumers in financial services via advice and challenge to the Financial Conduct Authority (FCA).
The panel wanted to look at this issue in greater depth and understand whether the patterns identified exist elsewhere. To do so, Thinks Insight & Strategy was commissioned to examine the evidence base, exploring whether there is evidence that the use of personal data and AI in financial services decision-making is causing detriment to groups with protected characteristics.
To answer this complex question, we analysed almost 70 sources, from published academic pieces to thought-leadership articles. Sources included in the evidence base originated from Canada, Australia, the US, Europe and the UK. To reflect the global nature of the debate, we also interviewed thought leaders across each of these locations.
You won’t be surprised to hear that there is no straightforward answer to the panel’s question. However, three findings shine a light on the pressing need to use this evidence base for further debate:
1. Some groups are experiencing biased outcomes in financial services, as the Citizens Advice study shows – and it’s concerning.
A year after its initial research, Citizens Advice re-ran the study and again found that customers from ethnic minority backgrounds in the UK pay more for their car insurance. Another report – Improving access to insurance for disabled people, by Scope – recorded disabled people’s experiences of travelling abroad without insurance because of the unaffordable prices they face. Ongoing legal cases in the US point to black customers having to work harder to claim on their insurance. Something is happening.
2. It is strongly suspected that these biased outcomes occur as a result of the use of personal data and AI in decision-making. But that’s the key word: suspected.
We are familiar with the argument that bias can be reinforced and embedded in new decision-making tools through the use of historical and ‘proxy’ data. These practices are part of the reason experts suspect bias is inherent in the technology financial services firms use to make decisions – and that it produces different outcomes for certain groups in a way we might not be comfortable with as a society.
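To make the proxy-data argument concrete, here is a minimal, purely illustrative Python sketch using invented numbers: the ‘postcode_band’ feature, the £120 surcharge and the premiums are all hypothetical, not drawn from any of the studies above. No protected characteristic appears anywhere in the training data, yet because the proxy feature correlated with one when the historical prices were set, a model fitted to those prices re-learns the surcharge.

```python
# Illustrative only: invented data showing how a 'proxy' feature can carry
# historical bias into a model that never sees a protected characteristic.
import random

random.seed(0)

def historical_premium(postcode_band: int) -> float:
    # Past (biased) pricing: one postcode band was charged ~120 more, and
    # postcode happened to correlate with ethnicity (our assumption here).
    return 500.0 + 120.0 * postcode_band + random.gauss(0, 20)

# 1,000 historical quotes across two postcode bands.
data = [(band, historical_premium(band)) for band in (0, 1) for _ in range(500)]

# Fit a one-feature least-squares line to the historical prices.
mx = sum(x for x, _ in data) / len(data)
my = sum(y for _, y in data) / len(data)
slope = sum((x - mx) * (y - my) for x, y in data) / sum((x - mx) ** 2 for x, _ in data)

print(f"Learned surcharge per postcode band: ~£{slope:.0f}")  # ≈ £120: bias reproduced
```

The point is the mechanism, not the arithmetic: drop the proxy and the gap disappears from the model; keep it and the model faithfully reproduces the historical pattern.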
However, the challenge is categorically evidencing the link between efficient, technology-led practices and biased outcomes. The technology has evolved so quickly that firms cannot always be sure how the AI is being trained to make these decisions, or which data points it is using. And because companies do not always hold personal data on protected characteristics, they cannot ‘reverse check’ for biased outcomes. This is concerning, and it is why experts are calling for greater emphasis on governance and transparency for those holding data and using AI.
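A hedged sketch of what that ‘reverse check’ might look like – an audit comparing average outcomes across protected groups, with hypothetical field names – shows the snag the experts describe in the code itself: the audit fails the moment the protected-characteristic field is not held.

```python
# Illustrative sketch of a 'reverse check': auditing outcomes by protected
# group. Field names ('ethnicity', 'quoted_premium') are hypothetical.
from collections import defaultdict

def disparity_audit(records, group_key="ethnicity", outcome_key="quoted_premium"):
    """Average outcome per group -- only possible if group_key is held at all."""
    totals = defaultdict(lambda: [0.0, 0])
    for record in records:
        if group_key not in record:
            # The gap the article describes: no protected-characteristic
            # data means no way to check outcomes for bias.
            raise KeyError(f"Cannot audit: '{group_key}' not held for this record")
        bucket = totals[record[group_key]]
        bucket[0] += record[outcome_key]
        bucket[1] += 1
    return {group: total / count for group, (total, count) in totals.items()}

quotes = [
    {"ethnicity": "group_a", "quoted_premium": 520.0},
    {"ethnicity": "group_a", "quoted_premium": 540.0},
    {"ethnicity": "group_b", "quoted_premium": 610.0},
]
print(disparity_audit(quotes))  # {'group_a': 530.0, 'group_b': 610.0}
```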
3. Whatever the impact of new practices, we need to take a step back and agree, at a societal level, how fair or unfair we consider different outcomes in financial services to be.
Technologies aside, society needs to explore: what are fair and reasonable outcomes in the context of financial services? Who are they fair for? Is it right that older people pay more for insurance than younger people? Maybe so. Should mental health records be considered when making decisions about risk? What about physical health records? To what degree?
This type of conversation and agreement is core to any social contract that sets the rules by which we live. Our research shows it is especially needed in the context of AI development. We need to understand how technology impacts notions of fairness and influences outcomes. In the context of the FCA’s new Consumer Duty, it is even more important.
AI and research
The principles we suggest for using AI in research are:
● Use AI to expand reach, making research more creative, expansive and engaging
● Challenge to get the best out of a project, regardless of AI involvement
● Be transparent about how AI is used in research
● Include the widest range of voices in research and address bias.
The panel has used the insights to inform recommendations to the FCA, highlighting in particular the importance of transparency between firms and consumers.
If implemented, these recommendations could lead to heightened protection for consumers.
These considerations also matter for our own industry’s practice, and they must evolve as technology, the societal context and our awareness of AI’s impact evolve. We are at a point when none of us can afford not to reflect explicitly on what it all means. This requires evidence and debate in society, which, as an industry, we can be pivotal in providing – while considering the evidence and its impact on our own industry, too.
Phoebe Ward is associate director and Carol McNaughton Nicholls is associate partner at Thinks Insight & Strategy