FEATURE | 1 March 2011

Testing times

In his experience in the world of advertising, Jacob Wright has found research can be as much a hindrance as a help. Where is it going wrong and how can it be fixed?


Research: Do you like market research?
JW: Absolutely in theory but very rarely in practice. It should be very useful, but there are structural and methodological reasons why it often isn’t. I don’t think it’s because of a lack of willingness from people in the research industry; I think it’s the interaction between research and large corporations that’s essentially problematic.

I would love to see much more scientific and particularly psychological rigour applied to the way research is carried out. We now know a hell of a lot more about how the human mind works and how advertising works than we did when I started in this industry in the late 1990s, and I don’t see that reflected in the way that we test advertising. There is a massive wealth of literature which shows that asking people why they’ve done things or what they think they’re going to do is not an accurate method of predicting people’s behaviour.

“There are lots of convenient fictions out there which do nobody any good, and make it more likely that we as an industry produce stuff that consumers out there find irritating and that wastes everybody’s money”

Most researchers would say they are aware of that. But you don’t see it reflected in the way research is done?
If we take a very standard pre-testing measure, like brand appeal, it’s primarily measured by asking people, ‘Do you think after seeing this ad that you are more likely to buy this product?’ Many clients would use that as the single measure for making a choice about whether an advert works – and it’s absolutely based on introspection. It completely discounts a wealth of literature that’s out there, such as Binet and Field’s Marketing in the Era of Accountability, Robert Heath’s work, Andrew Ehrenberg’s work, all of which shows that the effects of advertising can in many cases be due to processes that people aren’t conscious of.

If research as a proportion of marketing spend has grown, the question that the industry has to ask itself is: is its clients’ advertising actually becoming any more effective? We’re in an era when what we do is very strongly questioned for all sorts of reasons, so it’s crucial for everyone in marketing to better engage with the robust truths about how what we’re doing works. There are lots of convenient fictions out there which do nobody any good, and make it more likely that we as an industry produce stuff that consumers out there find irritating, that wastes everybody’s money, and that makes marketing less sustainable as an activity. Business likes to look at itself as faster and more responsive than government, but you’re in a situation now where you see the Conservative Party more engaged in things like behavioural economics than the marketing industry is, and I just find that utterly laughable.

Has the way research is used got worse during your time in advertising?
I think I have seen the reliance on pre-testing get heavier. It’s a seductive thing to be able to justify why you ran an ad that didn’t work very well, or if you’re a global brand director in London trying to make sure your guy in Vietnam is producing acceptable advertising. That’s why it’s so prevalent, and for some organisations that’s exactly what they need. But research that is optimised to help marketers make decisions is not optimised to maximise creativity and effectiveness. They are different objectives.

I also think it has become a less intellectually rigorous exercise. You can go for a year with some clients and they will not mention anything other than a secondary measure. A lot of the things we talk about in marketing, even the idea of a brand, are useful theoretical constructs for how what we do affects people’s behaviour, but when you start treating those constructs as ends in themselves, that’s when what you do becomes unreal. It’s research agencies who are in a position to tell us when we’re wasting our time.

And they’re not living up to that?
I don’t think so, no. One observation that’s gaining ground is that in tracking studies, individual brand measures don’t tend to move independently; they tend to move en masse. This is a great example of the kind of place where we could trim the fat a bit, and then more money could be invested elsewhere.

What role do the relationships between researchers and ad agencies play?
Advertising agencies certainly don’t help themselves in their dealings with research agencies. Most ad agencies walk into that research debrief meeting with the feeling that the research agency is about to destroy their ad. The reason there’s a gulf between the two is that most pre-testing methodologies seek to make sure the client is placing their ad in the right half of the bell curve [showing the effectiveness of different ads], and they probably do achieve that: they get rid of the disasters. However, I would also maintain – and this is much harder to prove – that those methodologies make it very hard for you to make a truly excellent ad.

So what’s the right way to go about it?
I think things have to be assessed against how they’re designed to work. Hall & Partners, from what I understand about their approach, will start from, ‘What’s the advertising strategy here?’ and will then tailor their methodology to that. Another company whose approach I find very interesting is BrainJuicer, who avoid that problem of explanation because they ask consumers for a straight-out emotional response.

Another thing I think we need more of is post-testing – did the campaign work? Were I in charge of an advertising research agency, I’d be thinking about how to shift my clients from investing in pre-testing to investing in post-testing and econometrics. And digital media offers us a great opportunity to be faster about how we do things like post-testing, so there’s loads of hope.

“Were I a client, I wouldn’t invest in pre-testing. I would far rather have, for example, a payment-by-results system with my agency”

But clients need to know whether something’s going to work now, not after the fact.
If you’re the advertising agency, you’re saying, ‘We as a company have x years of experience of producing advertising, and based on that we recommend to you what we think is going to work.’ To then have something which, frankly, can seem reductive and methodologically dubious held up as being superior to that expertise is very bruising to the ego and, I think, not always supported by the evidence.

Were I a client, I wouldn’t invest in pre-testing. I would far rather have, for example, a payment-by-results system with my agency. I’d invest my money in testing whether stuff had worked and in doing econometrics, and then reap the efficiencies in how much money I paid to the agency. If they’ve done a good job, they get more cash, I get better results. If they’ve not done a good job, they get less.

To me that’s a better use of the client’s budget than testing something which is not what you’re actually going to make, in a situation which is not analogous to the one in which the thing you’re making is going to appear, and using a methodology that doesn’t accurately reflect how the thing you’re making is going to work.

You can’t give up on pre-testing, though, can you?
The question to ask is, how do you do it in a cost-effective manner? I’ve worked with a client who has tested a regional ad 27 times – for the cost of that you could have made the thing twice over. What a research agency is well placed to do is to find quicker and cheaper ways to do pre-testing, because to me the cost-benefit equation is not there. I think the way to success is to work back from what the business effects are, to work out what are good predictors of those. The truth about many of the legacy methodologies out there is that they’re based on what people thought about marketing back in 1960 or 1970, and because there’s been a desire to keep using the norms and to build on the way things are done, we’ve been stuck with them.

The biggest name in pre-testing is Millward Brown. What do you make of their approaches?
If I were in a situation with a client who had a clear product advantage and a rational reason to choose that product, I would be happy to test it with Millward Brown. In other contexts I’m less sure. What they have that I would love to get my hands on is an incredible dataset. But were I in their position I’d be worried, because the media environment is changing, and one has to question how relevant all of that data still is.

Is there a way around the tension between ad agency creatives and research agencies, or is it inevitable?
Ad agencies are less bothered about being told whether or not their work is good or bad; what they are bothered about is being told how to fix it. The integrity of something as a piece of communication is quite a delicate thing, and to have someone come in and tell you, ‘If you moved this scene from here to here then your ad would be better’ is a very difficult thing to swallow – and quite often wrong, because I don’t think, by and large, the people who are writing those research debriefs are people who have made many successful pieces of advertising in their lives.

But isn’t it a cop-out to just tell researchers, ‘It’s all terribly complicated and ultimately we know best’?
No, I don’t think it is. If researchers come to an ad agency and say, ‘People love your ad, the problem is that at the end they can’t remember what the brand was,’ the ad agency is going to go away and try and come up with a solution that preserves what’s entertaining and interesting about the ad and the bits of it that are on-strategy – but we want to be in a position to prescribe that solution.

The views expressed are Jacob Wright’s own and not necessarily those of Mother

2 Comments

13 years ago

Working in advertising research, I found it really interesting to read Jacob Wright’s opinion on where the research community go wrong, but I also found it slightly disappointing, if not wholly surprising, to see some of the misconceptions he has about common pre-test methodologies (and I should say now that I blame the research community for allowing these misconceptions to spread).

Jacob is absolutely right that there are some very simple introspective questions used in some pre-test methodologies to measure some aspects of the performance of an ad. The example he cited, ‘more likely to buy [after seeing the ad]’, is one used by most pre-test providers in some guise or other. Clearly, this is the type of measure that is most sensitive to ads that present a clear rational reason to choose a product. Pre-test methodologies offer measures like this one because a lot of advertising is trying to do that, and because it is simple introspective measures like this that have been shown to correlate best with short-term sales shifts (which speaks to Jacob’s point about ‘working back from what the business effects are, to work out what are good predictors of those’).

However, it is not right that all pre-test measures are like that or that they haven’t moved on for the last 30 years. In fact, along with these introspective measures of rational product choice, most pre-test methodologies now offer vast arrays of emotional measures (similar to those offered by BrainJuicer), and a good pre-test supplier would always be looking to start from an understanding of the specific ad objectives before designing the research and reporting the results. Even beyond the measures of emotional response offered by most pre-testing suppliers (many of which are founded on and validated by solid psychological theory and more recent neuroscience learnings), there are now also a variety of indirect research techniques emerging which allow us to understand the sub-conscious as well as the conscious response to the advertising creative. These include eye-tracking, brainwave measurement and Implicit Association measurement. On the whole, these techniques serve simply to prove that there tends not to be that great a difference between the conscious and sub-conscious (if an ad triggers an emotion, we’ll usually know it and generally be able to articulate it, or at least be able to agree that we enjoyed it). However, they certainly add another layer of understanding and diagnosis.

So, there are in fact a very wide range of very powerful pre-testing research tools on offer from a variety of suppliers that fulfil many of the criteria that ad agencies as well as clients would set. But Jacob’s views are totally understandable given how pre-test research is sometimes administered and delivered (as if off a conveyor belt) and how it is used in some client organisations (as a one- or two-number stick to beat agencies with). Also, the point about the difficulty in balancing the research objective of maximising creativity against that of having a simple global decision-making tool is a valid one.

So, the challenge for research agencies is to use all the powerful research tools at our disposal as constructively as possible: to engage better with ad agencies as well as clients, to make sure we’re measuring things that matter, and to work together to use this measurement to improve the end result (not by the research agency prescribing the creative solution, but through them fully explaining and diagnosing both the strengths and weaknesses of the creative) – rather than allowing pre-testing to be reduced to a one-number, introspective measure of likelihood to buy!


13 years ago

Please please please get in touch!! Your comment – ‘These include eye-tracking, brainwave measurement and Implicit Association measurement. On the whole, these techniques serve simply to prove that there tends not to be that great a difference between the conscious and sub-conscious (if an ad triggers an emotion, we’ll usually know it and generally be able to articulate it, or at least be able to agree that we enjoyed it)’ – is, simply, wrong. Our inability to introspect and report on our emotions is why these techniques have emerged. There can be HUGE differences between sub-conscious and conscious measures of the same ad. I’d be happy to prove it to you.
