Feature | 1 February 2012

Let’s take a long hard look at ourselves


Researchers like to think they provide cool-headed support for tough decisions. In fact, we’re subject to the same quirks and weaknesses as the people we study, says BrainJuicer’s digital culture officer Tom Ewing.

The recent vogue for behavioural economics among researchers has thrown a welcome spotlight on the mechanisms of how people make decisions. Researchers have always known consumers don’t do what they say they’ll do, but the challenge of behavioural economics is to do something with this knowledge – to create research that fits our better understanding of human decision-making, not simply accept rational models as our best bet in the face of a chaotic process.

But for an effective piece of research, we need to understand more than just consumer decisions. We researchers ourselves, and the people who use our work, also make important choices. What can our improved knowledge of human decision-making tell us about this?

There’s always a possibility that the answer is “nothing”. Environmental context, social influence and our own unconscious biases may shift our decision when it comes to choosing a brand of toothpaste, but surely we’re able to see past those when it comes to selecting a technique or supplier?

On the basis of studies on other experts, this doesn’t seem likely. One famous study by researchers at Ben-Gurion University looked at parole decisions made by judges – expert minds, chosen for their intelligence and consistency. They found that if your case was heard before the judge took a break, you had a far lower chance of leniency than if you were heard after. Simple hunger and tiredness were causing judges to take the easy route of continuing the status quo and denying parole. And if judges are unconsciously irrational, what hope is there for the rest of us?

Behavioural economics is long on fascinating studies and short on coherent ways to make sense of them. So at BrainJuicer we’ve started using a behavioural model combining insights from behavioural economics, psychology, cognitive science and other disciplines to guide us when we start looking for behavioural insights on a project. We believe that ‘system one’ thinking – intuitive, fast, emotional and unconscious – guides most of our judgements, while slower, more rational ‘system two’ thinking is used mainly to back them up. System-one decisions are influenced by the environment and the people around us, and shaped by our own cognitive idiosyncrasies. This framework – environmental, social and personal – is what we’ll use to examine the research process.


Research in the spotlight
We know that the type of music playing in a shop can affect a consumer’s choice of brand. But the influence of environment isn’t limited to the immediate and physical – it’s also evident in the choice architecture, the set of options presented to us as we make a decision. One of the key bits of behavioural knowledge is the ‘paradox of choice’ effect, which suggests that as our choices widen we find it harder to make decisions – not something any research buyer is likely to disagree with, in the face of such a range of very similar solutions.

Clients have resolved this by using rosters and mandates to narrow their choice architecture, but in building these rosters they may be victims of another bias: diversification bias. If you go to M&S for a sandwich today, you can predict with fair accuracy which one you’ll choose. But if you’re asked to name in advance the sandwiches you’ll eat this month – building a sandwich roster – you’ll be far less accurate. After two days of your favourite sandwich, you imagine, it will be time for a change – but when the third day actually comes you stick with the prawn mayo and ignore that cheeky focaccia. Diversification bias leads people to pick a wider range of choices than they will actually make. If agency rosters are subject to it, then those selected should be wary of opening the champagne too early.

The choice environment of research can be oblique, but its social context is far more transparent. It’s no surprise that social proof – doing things because other people do them – is so powerful within such a small and close-knit industry. Nor is this anything to be ashamed of: for all marketers’ talk of zagging where others zig, we are (as Herd author Mark Earls regularly reminds us) an animal designed for copying. Thinking for oneself, while hugely important in some circumstances, is also rather rare: it’s system-two thinking, which our brains avoid where possible. So behind the rational benefits of norms and trackers lie powerful emotional forces: if predecessors and competitors tested their advertising or asked attribute questions in particular ways, moving against that involves defying cognitive biases towards habit and conformity.

Social proof also explains why particular topics dominate some levels of the research industry and go unnoticed in others. Talking about gamification, storytelling or indeed behavioural economics seems like a successful strategy for those tweeting or speaking at conferences, so it’s copied by others who tweet or go to conferences a lot.

Individual heuristics and biases, which guide much of human decision-making, also play a part in shaping your position on research issues. Take endowment effects, for instance – a bias leading people to over-value things they own. A kind of endowment effect surfaces in research when individuals feel they ‘own’ particular concepts or insights under test – making a negative result far less acceptable. Spreading ownership can dilute the effect, which is a good reason to use workshops in innovation projects.


Another example of individual bias is the “curse of knowledge”, popularised by Chip and Dan Heath in their book Made to Stick. This is rooted in our habit of projecting our own mental processes on to others: once you’re an expert in something, it becomes far harder to explain to people because you overrate their level of knowledge. It’s easy to see how that affects a research debrief. On the one hand, researchers overestimate their clients’ knowledge of (and interest in) methodologies, and on the other, clients are frustrated by researchers’ inability to grasp ‘business issues’ which appear dazzlingly clear to them.

Don’t panic
Research is laced through with behavioural effects – in its choice architecture, its social context and at the personal level. But it’s striking how many behavioural tricks that take advantage of these effects are already woven into the fabric of business life and presented as hard-won wisdom. For instance, the practice of arranging seating in meetings to break up a client or research team is solidly rooted in social behavioural insight, but it’s also a basic part of any negotiating or creative workshop training. Perhaps it’s not so much that research is rational, but that we’ve learned to live with our own irrationalities.

Except, perhaps, the one behavioural tic our business relies on: post-rationalisation, the need and ability to find patterns in events and convincing explanations for those patterns. In his gleeful but spine-chilling book Everything is Obvious (Once You Know the Answer), network theorist Duncan Watts exposes humanity’s love of offering utterly convincing explanations for quite contradictory pieces of data. According to Watts, we as a species elevate common sense and received wisdom to the status of facts – we assume, for instance, that there is something inherently great about the Mona Lisa, whereas the painting’s history suggests that only a historical quirk elevated it to its current status. Watts takes the claims of behavioural psychology a step further – not only are we poor predictors of our own actions, but we’re poor explainers of our own situation. For research, an industry built on explanation, this is a harsh lesson.

Reading Watts can leave you with the sinking feeling that explanation itself is futile. But the lesson for researchers from behavioural science is that whatever the fate of explanation, experimentation is not futile at all. Researchers are well placed to become experts on behavioural understanding, but most of the richest behavioural insights have come about not through research or even observation but through experiments. It’s here that researchers have much to learn and perhaps teach.

Even if sometimes our subjects should be ourselves.

1 Comment


Interesting to see the BE spotlight broadened from the coalface of research to wider business and industry culture too. Turning up System 2 thinking here could well pump more performance-enhancing blood through our veins… and attract a hungry herd of vampires eager for a bite! Company culture springs to mind as a key consideration too – what can we learn from BE to enhance decision-making in our own professional development, to bring out the best in our employees and to nudge the best working practice within our four walls? It's relevant through all rays of the research prism... And the list of potential biases goes on and on, and round and round! I suppose things get MOST interesting when we start to identify and work with those that are most pertinent to us.
