Human advocacy: Shaping a risky AI reality

In a multidisciplinary field, user research has often played second fiddle to market research, but that hierarchy now seems anachronistic.
Today, almost every aspect of our lives has either been digitised or has a mature digital analogue. Marketing, commerce and the contexts of consumption are now primarily digital. Arguably, we are ‘users’ first and ‘consumers’ second. AI transformation places user research at the forefront of both the research landscape and the human-AI relationship.
Meaningful ‘end user’ ethics
Across industry, the rapid adoption of AI calls for a new form of ethics, and new considerations of harm, privacy, power and autonomy are pertinent. Research and ethics have always been inseparable; but now, researchers must advocate ethical approaches with renewed urgency during design of digital experiences that will increasingly have capacity to erode, or enhance, human values.
Understanding end users is the daily work of user research. We will have a key role in unlocking and informing meaningful, ethical AI frameworks. User researchers are rarely afforded the luxury of being dispassionate; rather, we are embedded in design processes. We are the builders as well as the investigators of experience.
Agency and autonomy vs prediction and precision
Predictive analytics operate in an ethical area where design, autonomy and AI collide. Our interactive digital behaviour is analysed by machine-learning sequence models, which forecast emotional affect and preference patterns to nudge and personalise future choices.
In earlier industrial revolutions, thinkers – from Ford to Hayek and Keynes – reasoned that personal preferences were optimising and rational. They believed our decisions would be drivers of a liberal, individually differentiated good. More contemporary economists have identified the irrationalities rife in human decision-making. It is now widely acknowledged that we are persistently vulnerable to harmful choices.
With the exponential growth of predictive intelligence, never will market forces have known us better or attempted to mould our choices with such precision. In this environment, how do we as researchers protect users from undue influence or subtle harm? Where does our duty lie, and how can user researchers, aware of their responsibility, safeguard users at the inception of our digital spaces?
A ‘richer conception’ of digital ethics
As Paul Kingsnorth suggests in Against the Machine: The Unmaking of Humanity, ours is, ‘A culture with no sacred order. And this is a dangerous place to be’.
A 2024 Lyceum Project white paper from the Institute for Ethics in AI (IEAI) said: ‘Many today take the view that the AI technological revolution is creating a radical new reality, one that demands a corresponding upheaval in our ethical thinking’. This new form of ethics is identified as:
A richer conception of ethics than the dominant ethical theories in the discourse of AI: on the one hand, approaches grounded in the fulfilment of preferences or the maximisation of wealth; on the other hand, approaches based on human rights law. The former are focused on considerations that are not ultimate values; the latter are incomplete, failing to recognise that considerations such as virtues and the common good are essential.
It is surely user researchers who can lead in shaping this ‘richer conception’ of ethics.
Already, we refer to user researchers in the language of human rights – we ‘advocate’, ‘champion’, and ‘represent’. Surely, we must recognise a societal shift, a paradigm change, where the user is sometimes dependent on the system and is inherently vulnerable. Where AI shapes experience, we need a research approach resembling stewardship of the human experience.
Seductive ease of experience
The quality of experience that user researchers most frequently advocate for is seamlessness, characterised by ‘ease’, ‘satisfaction’ and ‘delight’. Ease and convenience are not often seen as siblings of compromising experiential qualities such as glamour, outrage or pleasure. This is partly the source of ease’s persuasiveness – we should not underestimate its seductive, corrosive qualities.
Where ease untethered from strong values wins out, we commonly find ourselves instead experiencing unease. The sociologist Émile Durkheim argued that when anything seems possible, yet social facts and shared values break down, the result is sometimes an endless desire for something undefinable – what he called ‘the malady of infinite expectation’. As user researchers, then, should we be true advocates and shift our focus away from ease, and towards social wellbeing, cohesion and flourishing?
Not only should user research adapt and orient how we build AI then, but also, AI has the potential to re-position and amplify the importance of user research itself.
Redefining user research responsibility
AI calls for user researchers to responsibly protect the sovereignty of user needs as owned by the individual and society, and to be sensitive to the ‘erosion by algorithm’ that exacerbates outrage, atomisation, polarity and anomie (a social condition defined by a breakdown of moral values, standards or guidance for individuals). User advocacy is now a pressing appeal to create truly human-centric systems, and an important way for us to adapt a diffuse AI environment to ourselves.
The IEAI identifies a deficit of public engagement with these issues – an engagement that user researchers are perfectly positioned to facilitate:
'All too often, the debates around AI regulation are conducted as a dialogue among a narrow set of technocratic elites, with the perspective of ordinary people whose lives are increasingly affected by these technologies consigned to the margins.' (Lyceum Project white paper)
User researchers, working ever more closely with ‘ordinary people’, might increasingly investigate long-term effects, risks, social change, veracity and value. User research extracts the whys, without which advancement becomes impoverished.
Embracing artificial intelligence asks us as researchers to engage our critical and ethical capacities with a new acuity; and to be brave in extracting, advocating for and articulating civic hopes for the future.
Kate Charles is a user research and insight professional
Research Live is published by MRS.