OPINION | 20 January 2020

New frontiers: expanding the framing toolbox


Applied well, framing is one of the most powerful concepts in any behavioural science practitioner’s toolbox, write The Behavioural Architects’ Crawford Hollingworth and Liz Barker in the latest of their series exploring the new frontiers of behavioural science.

Daniel Kahneman said: “An investment said to have an 80% chance of success sounds far more attractive than one with a 20% chance of failure. The mind can’t easily recognise that they aren’t the same.”

It neatly summarises the well-known concept of framing, first demonstrated by Kahneman and Amos Tversky in 1981. Their research, and much that followed, describes how perceptions, judgements, decisions and behaviours can change depending on how information is presented – particularly whether positive or negative aspects are drawn to our attention.

In this article – the first of a two-parter – we will analyse how the different types of framing have evolved over recent decades.

Where we were

The first study illustrating the framing effect, developed by Kahneman and Tversky, described what has become known as ‘The Asian Disease Problem’. They asked people to imagine the US was preparing for an outbreak of an unusual Asian disease, which was expected to kill 600 people. People were asked to choose between two alternatives – one presented as a ‘gain frame’ focusing on the positives, the other as a ‘loss frame’, focusing on the negative outcomes.

  • The ‘gain frame’ condition was presented as a choice between programme A, in which 200 people would be saved, and programme B, in which there is a one-third chance that 600 people would be saved and a two-thirds chance that no one would be saved.
  • The ‘loss frame’ was presented as a choice between programme C, in which 400 people would die, and programme D, in which there is a one-third probability that nobody will die and a two-thirds probability that 600 people will die.

People tend to choose programme A in the gain frame and programme D in the loss frame, even though programmes A and C, and B and D, are identical – illustrating how people make inconsistent choices depending on how information is framed.[ 1 ]
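The equivalence of the four programmes is easy to check by working out the expected number of lives saved in each case – a quick sketch (the function name is ours, purely for illustration):

```python
# Expected lives saved for each programme in the Asian Disease Problem
# (600 people at risk). Each programme is a list of (probability, lives_saved).
def expected_saved(outcomes):
    return sum(p * saved for p, saved in outcomes)

prog_a = expected_saved([(1.0, 200)])                 # 200 saved for certain
prog_b = expected_saved([(1/3, 600), (2/3, 0)])       # 1/3 chance all saved
prog_c = expected_saved([(1.0, 600 - 400)])           # "400 die" = 200 saved
prog_d = expected_saved([(1/3, 600), (2/3, 0)])       # "2/3 chance 600 die"

print(prog_a, prog_b, prog_c, prog_d)  # all four work out to 200 on average
```

The frames differ only in wording: every programme has the same expected outcome of 200 lives saved, yet preferences flip between them.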

This has become known as a type of ‘risky choice framing’, where people’s choices between a risky or riskless option could be influenced by whether the options were described in positive terms – a gain frame, or negative terms – a loss frame.

It was recently included in Brian Nosek’s Many Labs replication project, and the finding was replicated, although with a slightly smaller effect than in the original study.

In another experiment, run in 1982, physicians and patients were given statistics about the outcomes of two different treatments for lung cancer – surgery or radiation – and were asked to choose their preference. Surgery is riskier than radiation in the short term, but has better survival rates over the long term, making it the better option in most cases.

So, what is the best way of framing these outcomes?

Half the participants were shown the information as statistics about survival rates, and half were shown the same information but as statistics on mortality rates. Eighty-four percent of physicians who were told ‘Of 100 people having surgery 90 live through the post-operative period’ chose to go ahead with the surgery, compared with just 50% of those who were told ‘Of 100 people having surgery 10 will die during surgery or the post-operative period’.

Thinking about survival is emotionally positive and encouraging. Mortality, on the other hand, draws our attention to the fact that death may not be far around the corner.[ 2 ]

Since the early 1980s, countless studies have tested different frames in varied contexts: education, financial planning, mergers and acquisitions, consumer goods, gambling, project funding allocations, financial investments, medical treatments, the awarding of penalties, encouraging exercise and more. This plethora of studies led a team of psychologists to classify the different types of framing. Irwin Levin and his colleagues identified three main types:

  • risky choice framing
  • attribute framing
  • goal framing.[ 3 ]

The surgery example above is an example of attribute framing – one of the simplest types of framing – where a characteristic of an object or event is the focus of the frame. Another example of this type of framing is asking people to choose between minced beef presented in two different ways: ‘75% lean’ or ‘25% fat’. Most people choose – and rate more highly – the beef described as 75% lean, even though the two descriptions refer to the same product.[ 4 ]

In the third type, ‘goal framing’, the goal of an action or behaviour is framed: either the positive consequences of performing the behaviour are highlighted, or the negative consequences of not performing it.

Goal framing differs considerably from attribute framing: both frames imply that the behaviour is worth doing, but they influence individuals either by highlighting its benefits or by drawing attention to what might be lost by not doing it.

For example, a study in 1987 looked at how to frame breast self-examination to women, presenting these two options:

  • Positive frame: ‘Research shows that women who do breast self-examinations have an increased chance of finding a tumour in the early, more treatable stages of the disease’.
  • Negative frame: ‘Research shows that women who do not do breast self-examinations have a decreased chance of finding a tumour in the early, more treatable stages of the disease’.[ 5 ]

In this study, the researchers found that the negative frame was more impactful than the positive frame; women were more strongly motivated to avoid a loss – missing a tumour – than to attain a gain.

Another example of goal framing drawing upon the concept of loss aversion is the framing of credit card surcharges. Cash discounts and card surcharges are ostensibly the same thing, yet often they feel very different to the consumer, and can lead to differing decisions.

For example, consumers generally respond better to a discount for paying in cash (a positive frame) than to a surcharge for using a credit card (a negative frame). Since people are typically loss averse, they are more willing to forgo a discount (a missed gain) when using their card than to accept a surcharge (a loss).

In the 1970s, as credit cards became a more common form of payment, retailers wanted to transfer the cost of processing credit card payments (typically 1%) onto consumers by adding a surcharge to card payments.

The credit card lobbyists were understandably against such a surcharge, but as it looked like the bill allowing retailers to charge a fee would pass, they instead focused their efforts on stipulating the language used. Specifically, they asked that the cost be labelled a ‘cash discount’ rather than a ‘credit surcharge’. Customers saw the ‘cash discount’ as a bonus, with the credit card pricing as the default or regular price. If the frame had been the other way around, the credit surcharge might have been viewed by many as a loss, or an additional cost.[ 6 ]

In their analysis, Levin and his colleagues also speculated on the likely mechanism behind framing.

Why is it that a positive frame feels so different to a negative frame when it’s essentially the same information? They identified a negativity bias – our tendency to weight the negative aspects of an event or stimulus more heavily than the positive aspects.

Negative information tends to be processed more deeply, and contributes more strongly to the final impression, than positive information.[ 7 ] Negative things – unpleasant thoughts, emotions, social interactions, and harmful or traumatic events – have a much greater effect on our psychological state than neutral or positive things.

Some researchers speculate that in an evolutionary context, bad news or negative information may have often signalled danger. So, learning to identify and being quick to act on potentially hazardous situations was vital for survival. Today, we are still wired for such self-preservation.

Current developments

In recent years, three further types of framing have been identified, centred on how to best convey information so that it is most easily or accurately understood.

  1. Natural frequencies. There’s a surprisingly big difference between how we react to probabilities expressed as percentages (say, 10%) and how we respond to the same chance expressed as a natural frequency (one person out of every 10). Percentages are abstract and hard to imagine, so people often make perceptual mistakes when interpreting them. Natural frequencies, on the other hand, are much easier to imagine, particularly for less numerate people.[ 8 ] As psychologist Paul Slovic explains: “If you tell people there’s a 1 in 10 chance of winning or losing, they think ‘Well, who’s the one?!’ They’ll actually visualise a person.”
  2. Number size framing. We also react differently to the same number change at higher and lower numerical values. In other words, we are more sensitive to a unit change from two to three than we are to a unit change from 102 to 103. In practice this can have somewhat shocking implications; we are, for example, sadder upon hearing about a third death after two reported deaths than we are upon hearing about the 103rd death after 102 reported deaths.[ 9 ]
  3. Relative versus absolute risk. The final effect is very relevant for how we can best communicate statistical information; for example, a change in the value of risk or the impact of a policy, innovation or intervention. There are two main approaches to communicating such change – absolute change and relative change – and these approaches effectively frame information differently. 

Journalists, politicians and scientists seem to have become experts at leveraging the relative value of risk or change. Here’s a simple example: if a medication reduces the mortality rate from 20% to 15%, then the absolute reduction is a modest 5 percentage points, yet the relative reduction in risk is 25%. Studies show that in general, people are more strongly persuaded by the change in relative risk because it seems larger.
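The gap between the two framings is plain arithmetic. A minimal sketch of the mortality example above (the variable names are ours, for illustration):

```python
# A medication cuts mortality from 20% to 15% – the same change
# expressed as an absolute and a relative reduction.
baseline, treated = 0.20, 0.15

absolute_reduction = baseline - treated               # 5 percentage points
relative_reduction = (baseline - treated) / baseline  # a 25% relative drop

print(f"absolute: {absolute_reduction:.0%} points")
print(f"relative: {relative_reduction:.0%}")
```

Both numbers describe exactly the same clinical effect; only the denominator changes, which is why the relative figure sounds so much more impressive.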

Science writer Tom Chivers recently highlighted the danger of this type of reporting: “According to a widely reported study published in the BMJ [in November 2018], if you father children in your fifties, your children are more likely to suffer various health issues, including seizures. Specifically, if you are aged 45 to 54 when you become a father, your children are 18% more likely to suffer seizures than if you are 25 to 34.”

Most people will probably find this alarming – 18% feels like a dramatic rise in risk. But looking at the absolute increase in risk is far more reassuring because seizure risk from fathering at any age is tiny. Chivers points out: “Your child’s absolute risk of suffering seizures if you have a child when you are 30 is 0.024%: that is, 24 out of every 100,000. If you have a child at 50, it is 0.028%.” In reality, it will affect an additional four people in every 100,000.[ 10 ]
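The same arithmetic applied to Chivers’ figures shows how small the absolute change is. Note that raw arithmetic on these rounded risks gives roughly a 17% relative increase; the study’s reported 18% comes from its own unrounded, adjusted figures:

```python
# Chivers' seizure figures: absolute risks of 0.024% (father aged 30)
# and 0.028% (father aged 50), re-expressed per 100,000 births.
risk_30, risk_50 = 0.00024, 0.00028

relative_increase = (risk_50 - risk_30) / risk_30  # ~17% on these rounded figures
extra_cases = (risk_50 - risk_30) * 100_000        # ~4 extra children per 100,000

print(f"relative increase: {relative_increase:.0%}")
print(f"extra cases per 100,000: {extra_cases:.0f}")
```

A headline of “18% more likely” and one of “four extra cases in every 100,000” describe the identical finding – a vivid instance of relative versus absolute risk framing.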

David Spiegelhalter, professor of the public understanding of risk at the University of Cambridge and an expert in the framing and communication of risk, says: “Relative risk is fine for scientific inference [...but if you want to help people make decisions about their life] it’s useless... It’s totally the wrong measure. You cannot decide what’s an appropriate action without absolute risk.”[ 11 ]

References

[ 1 ] Tversky, A., & Kahneman, D. (1981). The framing of decisions and the psychology of choice. Science, 211(4481), 453-458.

[ 2 ] McNeil, B., Pauker, S., Sox Jr, H., & Tversky, A. (1982). On the elicitation of preferences for alternative therapies. New England Journal of Medicine, 306, 1259-1262.

[ 3 ] Levin, I. P., Schneider, S. L., & Gaeth, G. J. (1998). All frames are not created equal: A typology and critical analysis of framing effects. Organizational Behavior and Human Decision Processes, 76, 149-188.

[ 4 ] Johnson, R. D. (1987). Making judgments when information is missing: Inferences, biases, and framing effects. Acta Psychologica, 66, 69-82.

[ 5 ] Meyerowitz, B. E., & Chaiken, S. (1987). The effect of message framing on breast self-examination attitudes, intentions, and behavior. Journal of Personality and Social Psychology, 52, 500-510.

[ 6 ] Thaler, R. (1980). Toward a positive theory of consumer choice. Journal of Economic Behavior and Organization, 1, 39-60.

[ 7 ] Baumeister, R. F., Bratslavsky, E., Finkenauer, C., & Vohs, K. D. (2001). Bad is stronger than good. Review of General Psychology, 5(4), 323-370.

[ 8 ] Malenka, D. J., et al. (1993). The framing effect of relative and absolute risk. Journal of General Internal Medicine, 8(10), 543-548; https://www.researchgate.net/profile/John_Baron2/publication/14928379_The_Framing_Effect_of_Relative_and_Absolute_Risk/links/0c96053b2efe083038000000/The-Framing-Effect-of-Relative-and-Absolute-Risk.pdf

[ 9 ] Slovic, P. (2007). “If I look at the mass I will never act”: Psychic numbing and genocide. Judgment and Decision Making, 2(2), 79-95.

[ 10 ] Chivers, T. (2018). Double the risk of death! The problem with headline health statistics. New Scientist, November 2018.

[ 11 ] ibid.
