Opinion | 13 December 2010

Mixed methodologies: a recipe for more error, not less?

Mixing research methodologies doesn’t automatically increase the validity of a study’s findings, argues Philip Graves. If the contribution of each part isn’t clearly understood, particularly in terms of its psychological validity, there is a very real risk of drawing erroneous conclusions, he warns.

The basic premise, that more information is better, is hard to refute, but the underlying requirement must be either that all of that information is equally valid, or else that there exists some sound basis on which to weigh each source selectively to produce an accurate composite view.

Imagine discovering your compass had an intermittent fault that resulted in you getting lost half the time: would you really go and buy another navigation device and use it alongside your compass in the expectation that this would make finding your destination easier? Mixed-method approaches provide the most fertile ground for indulging the human mind’s capacity for confirmation bias – our ability to see whatever fits our prior or initial assessment and to overlook or downplay whatever doesn’t. With two occasionally conflicting navigational devices, we select the route that looks better to us because it fits with our expectations of what the route really ought to look like (but if we get lost it’s not our fault).

“Mixed-method approaches provide the most fertile ground for indulging the human mind’s capacity for confirmation bias”

Recently the Office of Fair Trading (OFT) published a report entitled Advertising of Prices. Part of this report explains that the OFT has investigated the psychological phenomenon of framing (where prices are judged not in absolute terms, but with unconscious reference to the numerical context in which they’re presented). So full marks to the OFT for embracing the importance of psychology; fewer marks for wasting taxpayers’ money by conducting further research and concluding: “As predicted by the psychology review, different price frames have different effects and the effect of some price frames are more powerful than others”. Kahneman and Tversky weren’t making results up thirty years ago.

In addition to confirming that framing exists in regard to pricing, the OFT conducted qualitative and quantitative research. And here we start to see the problems of mixed methodologies. A psychological review has supported the notion that price framing happens, experimental research has reconfirmed it, and now the OFT wants to know what consumers think about it. The nub of the issue is that framing is an unconscious process; the whole point is that it influences people precisely because consumers don’t think about it consciously. When the qualitative and quantitative research take what is processed in the unconscious mind and drag it into conscious focus, the results are, to a psychologist at least, reasonably predictable: cue the report that says consumers want lots of choice, really low prices and everything presented in a straightforward way, and that they aren’t influenced by brands or advertising.

Let’s look at one price practice in more detail, so-called ‘drip pricing’. This describes situations where extras are added in between selection and payment. It is the pricing practice the OFT believes consumers object to most. In the survey we “learn”, after a series of questions designed to identify respondents who have experienced drip pricing, that: 83% found the extras cost more than they had expected; 55% would have bought differently if they’d known the final price at the beginning; 51% believe they could have bought the same product more cheaply elsewhere; and more than 80% would behave differently if they found themselves in the same situation again (18% wouldn’t do anything differently). If this data is to be believed, what is the problem with drip pricing? Apparently, most people learn from their experience and are transformed by it into consumers who will shop around more (41%), choose extras more carefully (20%) and check price comparison websites (19%). Fewer than 2% of people interviewed didn’t know what they would do differently (or else were too apathetic to post-rationalise an answer).

The OFT summarise their research results as follows:

“A significant proportion of people felt they had coped poorly with price framing in that they considered they could have obtained a better deal and would do something different the next time they encountered such an offer.”

In my opinion, two innocuous words reveal the confirmation bias present in the report: “in that”. The inference has been made that people felt they had coped poorly with price framing because they believe they could have shopped better and would act differently in the future. If the research is to be believed, isn’t it equally the case that people said they had learned from a negative experience? (There was no question that I could find asking people if they felt they’d “coped poorly” with price framing, or discussing the somewhat curious notion of “coping” with advertised prices in any form.)

“We’re at a new frontier of consumer understanding… we need to be prepared to recognise that some of what we have believed up to this point has been wrong”

Perhaps the OFT’s interpretation was “informed” by the qualitative research. Leaving aside the point that, in my view, there is sufficient psychological evidence about how people behave socially to show that focus groups are too flawed to be worthwhile references for such work, the priming suggested by the discussion guide arguably undermines anything collected from it: “Today we’re talking about pricing, and the way prices are presented to attract you to buy” (italics added). Soon after, price structures that are ordinarily interacted with unconsciously become the focus of the group’s conscious scrutiny, blithely ignoring the fact that there is no direct mental path through which the conscious mind can access the unconscious processes that support it. The suggestive introduction will now ensure the discussions reach a predictable destination.

As soon as the methodologies were mixed, as soon as the psychological literature review and experimental research were combined with conscious interrogative techniques, the authors were inviting one conclusion: that price framing can “harm” consumers. Scrutinising most unconscious influences would, in all likelihood, elicit a similar response; people don’t like the fact that they make decisions that don’t stand up to rational scrutiny: we cling to the delusion that we are conscious agents of our actions.

We’re at a new frontier of consumer understanding, but in order to embrace what psychology is telling us we need to be prepared to recognise that some of what we have believed up to this point has been wrong. Mixing research methodologies doesn’t automatically increase the validity of a study’s findings. If the contribution of each part isn’t clearly understood, particularly in terms of its psychological validity, there is a very real risk of drawing erroneous conclusions. Understanding the psychology of shopping is essential for anyone involved in understanding consumers, but part of understanding the nature of a consumer’s mind is recognising when not to believe what he says when you ask him what he thinks.

Philip Graves is a consumer behaviour consultant and author of the book Consumer.ology: The Market Research Myth, the Truth about Consumer Behaviour and the Psychology of Shopping (recently named one of Amazon UK’s ten best business books of 2010).

5 Comments

14 years ago

Adding additional methodologies won't make up for flawed methods or flawed interpretations. The author seems to be making a case against mixed method studies based on one study with flawed interpretations and possibly flawed methodologies. This doesn't automatically make all mixed method studies flawed. I have seen mixed method approaches work VERY well in the private research world when the research team/researchers involved have clear, well-formed methodologies and good synthesis of the resulting data. The proof ends up being in how well the study results match observables in the real world. This is the case even with single method approaches. This article reinforces the need for care in the creation of mixed method approaches and the interpretation of data. I do not see a compelling argument against the use of mixed method approaches in this article.

14 years ago

I agree with the poster above. It is an interesting point, but from the example you give it seems more like the OFT were just asking a stupid question: ‘What do consumers think about their own unconscious processes?’ From that point on, any mix of methodologies would produce meaningless results.

14 years ago

Seems odd to focus on the amplification of repeated errors instead of the great effect layers of brilliant research can have?

14 years ago

I am confused by the author's argument - the compass illustration does not help his cause, as the compass may be faulty, but using the ‘other’ device (assuming it is not faulty) will show a conflict with the compass's findings, and therein lies a need for further investigation, or a need to issue a warning that the results may not be fully accurate without further verification. And this article makes a central truism, but makes a meal out of it. Of course mixed methods don't ALWAYS equal better validity. Rubbish in, rubbish out. All methods, whether mono or multi, need to be rigorously considered at the design stage to ensure that they adequately collect data without introducing bias.

14 years ago

If you use mixed methods, how do you amalgamate the information? For instance, I still see companies doing qual before quant and qual after quant. Either they want to understand what to quantify or to understand better what the numbers are telling them. But I agree that these are times when confirmation bias is very likely. Market researchers seem to make the default assumption that they have identified something valid at each step of the way. We might argue over the extent of the problem (and the author of this article clearly believes it’s much bigger than most people do), but how do we know when research findings are accurate? The answer seems to be, “We can tell.” As a professional industry wanting more credibility for what we do, do we really think that’s good enough? I don’t. The OFT aren’t a marginal institution using research for the first time. They do a vast amount of research and use it to inform big decisions that have a profound impact on many businesses. A big market research company has been involved in this work (it’s easy to find out who they are) and I think we should be extremely concerned about it and more willing to look at our own approaches objectively in light of the flaws being pointed out.
