FEATURE | 1 February 2011

Apply with care

Nine out of ten researchers agree that inflated survey-based claims in TV and magazine ads are infuriating. Adam Curtis, associate director of Outlook Research, searches for a solution.

We’ve all noticed the small print that pops up at the bottom of the TV screen when advertisers are making claims about their products. For experienced research professionals, these disclaimers can raise more questions than they answer. The cosmetics industry is particularly prominent in its use of research to persuade us that its products can make us younger, shinier and lusher. Let’s take a look at what they’re saying.

In advertising for its Grow Luscious mascara, Revlon declares that “96% saw instantly longer, lusher lashes”, while Rimmel proclaims that 93% agree “that lashes look remarkably long” after using something called Lash Accelerator. At first glance these seem to be pretty impressive claims (and may well have the desired effect of stimulating interest and perhaps trial), but we should look even closer.

For a start, both ads admit to using lash inserts and enhancing the image in post-production, which immediately creates a somewhat inflated impression of the product’s benefits. The male voiceover in Rimmel’s ad purrs that “lashes look up to 80% longer instantly”, a claim for which no evidence is offered other than the image on screen (which, as we know, they’ve had to enhance).

Secondly, how can the claims of “instantly longer, lusher lashes” and “remarkably long lashes” be substantiated, and what does this mean to the consumer? As was recently highlighted by the BBC’s Watchdog programme, the issue is not confined to mascara. Pantene Pro-V claims that its shampoo and conditioner give up to 60% more volume – but this is compared to unwashed hair rather than a competing product (or even ordinary soap). Rimmel claims that its Vinyl Gloss lipgloss makes lips “up to 80% shinier”, but shinier than what? The answer turns out to be bare lips – and there’s no indication of how they’re measuring shine.

Here comes the science bit
If we take these claims at face value, a percentage figure in the region of 80 or above seems rather good. However, it is clear to the more eagle-eyed researcher that these scores are far from impressive given the small sample sizes on which they are based. Rimmel claims that 79% of women agree that the “Smart-Tone technology” in its Match Perfection powder “mimics skin tone”. The small print (in accordance with Advertising Standards Authority guidelines stating that all claims have to be substantiated and the basis for them made clear) reveals this was taken from the opinions of just 52 women. Statistical analysis of this result uncovers a margin of error of +/–11 percentage points at the 95% confidence level, which means that this headline score of 79% could in fact be as low as 68% – far less inspiring than the original figure. This is also true of Clarins (which uses a sample of just 44 women for claims made about its Vital Light Day product) and Revlon (whose claims about Grow Luscious mascara, mentioned earlier, are based on only 53 women).
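
To see where a figure like +/–11 comes from, here is a minimal sketch of the standard margin-of-error calculation for a sample proportion, assuming a simple random sample and a 95% confidence level. The figures are the ones quoted above; the function name and code are purely illustrative, not anyone’s actual methodology.

import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a proportion p observed in a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# Rimmel Match Perfection: 79% agreement from a sample of 52 women
print(round(margin_of_error(0.79, 52) * 100, 1))  # ~11.1 points, so the true score could lie roughly anywhere from 68% to 90%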

Another behemoth of the industry deserves a mention for a product in its male skin care range. A recent Gillette ad claims that 85% of 68 men agree that the Fusion razor “has five blades with an anti-friction coating that allows them to float comfortably”. Does this mean that 85% of the men agreed that the razor has five blades? Did the others think it had a different number of blades? Or does it mean that 85% of the men agreed that it has an anti-friction coating that allows the blades to float comfortably? One assumes the latter, although surely ‘comfort’ is a prerequisite for a sharp instrument designed to be drawn across the face every day. As tennis star Roger Federer strokes his face with pride, the voiceover concludes: “No wonder that Gillette Fusion research is recognised by the British Skin Foundation.” Indeed.

Questions and answers
None of the above fall foul of ad regulations – but others have. The Advertising Standards Authority recently upheld a complaint about a TV ad for Nivea Visage Anti-Wrinkle Q10 Plus which stated that “37% of women feel more attractive now than they did 10 years ago” – a claim that had been taken from an unrelated attitudinal study and shoehorned into the creative for this product.

Research asked the other advertisers mentioned whether they thought their research was robust enough to support their claims. Revlon had not responded at the time of publication. Coty, which owns Rimmel, said its testing methods were “undertaken in line with regulatory, legal and industry requirements” and that it is “committed to maintaining the highest standard of business practices”. Pantene said: “The claims made in our cosmetic advertising are legal, decent, honest and truthful, as required by the independent UK advertising regulator codes of practice. They are based on sound science and rigorous testing and are relevant and meaningful to consumers.” Clarins said: “Our Vital Light TV ad was approved by Clearcast to ensure that it is compliant with the BCAP TV advertising standard code. We supplied them with all the necessary documentary evidence to substantiate our claims.” The firm did not respond to a request to share the same information with Research. Gillette said its ad had been approved by Clearcast but provided no further information.

None of the brands volunteered any methodological details about how their studies were conducted.

A call to action
It is the research industry’s duty to tackle head-on the use of survey-based claims in ads. If not, we risk those in the public domain devaluing the research industry as a whole.

We need to work with the Advertising Standards Authority to create new guidelines regarding research-based advertising and to highlight the fact that stating what these claims are based on is not enough: consumers are simply not aware of what this means in reality and are immediately sceptical when presented with reams of small print. I believe that a minimum sample size – 200 per cell would be a good start – for any claims-based research is one sure-fire way we can help to protect the integrity of our industry (Direct Line base their latest home insurance claim on a sample of 281 nationwide incidents). Not only would this reduce the margin of error to a more respectable maximum (+/–7), it would also standardise the claims of all brands so that consumers can make more accurate assessments and informed choices.
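
As a rough check on the +/–7 figure: the margin of error is at its largest when the reported score is 50%, so a 200-per-cell sample caps it at about seven percentage points. Below is a minimal sketch, again assuming a simple random sample and a 95% confidence level, run over the sample sizes quoted in this article; the function name is illustrative only.

import math

def max_margin_of_error(n, z=1.96):
    """Worst-case 95% margin of error (p = 0.5) for a simple random sample of size n."""
    return z * math.sqrt(0.25 / n)

for n in (44, 52, 68, 200, 281):
    print(n, round(max_margin_of_error(n) * 100, 1))
# 44 -> 14.8, 52 -> 13.6, 68 -> 11.9, 200 -> 6.9, 281 -> 5.8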

In the meantime we need to continue to ask tough questions about the research-based claims that advertisers use. We also need to continue to demonstrate our value as experts by providing robust research to substantiate all claims, while remaining as cost-effective and timely as possible. I’m aware that this is what all good researchers and research agencies continually strive to achieve, and that more often than not it is the client that dictates the quality of research that can be achieved within their budgets and timeframes. In the long run, strengthening these partnerships and upholding a sense of what is honest and fair will serve to benefit both clients and agencies.

This article was written by Adam Curtis with additional reporting by Research

8 Comments

13 years ago

I was moaning about this only last night. Needless to say, no-one else in the room seemed to share my distress. Add the above points to the plethora of made-up words and one could suggest some adverts are pure nonsense. A very good point, and I entirely agree that it is the research industry that should be driving the quality of these claims. For me, the excuse that someone else will provide it if we don't doesn't stand up. It is about the long game and supporting the integrity of our profession.

13 years ago

One of the few pleasures I glean from ridiculous TV ads is parsing the small print highlighting their research methods! But it was my understanding that the ASA is now more aware of this kind of issue, and advertisers are playing accordingly - for example, ads quoting figures based on cell sizes <100 are becoming increasingly rare. Also, it's not just isolated to cosmetics - Lurpak Spreadable (I believe) had to pull an ad featuring Gary Rhodes because it claimed that people preferred it to Utterly Butterly. On closer inspection, it turned out that 45% of respondents had opted for Lurpak; a slightly smaller number had gone for Utterly Butterly, and around 10-20% had given no preference. I guess, whatever the regulations, advertisers are always going to abide by the letter rather than the spirit of any rules, and ultimately the only solution will be ever more restrictive regulation.

13 years ago

We're plagued with this issue down here in Australia as well. I loved the spin that Rimmel is "committed to maintaining the highest standard of business practices", which ipso facto means that a sample size of just n=52 is "the highest", so God knows why Gillette bothered with n=68. But clearly, how can such global behemoths be expected to stump up for even your suggested n=200 minimum sample size - they'd be on Poor Street before you know it. Congratulations, Adam, on calling them out on this load of poppycock.

13 years ago

Couldn't agree more with the point here. This should be a topic the MRS is more stringent on - and more proactive about. We have a duty to protect the users of the products, as well as the research. I would not put my name to a study that used a sample of less than n=100 for such a massive brand. Another point to consider - are the n=68 etc. samples coming from a US study? If so, I would hate to think what else they could get away with - not entirely 'UK correct', is it?

13 years ago

I have no particular interest in defending the use of surveys in this way, but at least the advertising brands know what they're doing when they conduct and publicise them. This, it strikes me, is somewhat more justifiable than the rest of the quantitative survey industry pretending that its results represent some kind of objective measure of something meaningful, when in fact the vast majority of survey responses are a by-product of a wide variety of influences brought on by the process of asking questions and the timing and location in which they are asked. For instance, how often do you see a statistic reported about a financial matter that says, "If we had asked this question on-line we would have got a different answer"? Or one from any opinion measure that says, "If we'd asked a different question before this we might have got a substantially different answer"? You don't see it because the truth about survey responses is one of research's dark secrets.

Taking issue with small sample sizes seems to me to be an irrelevance. The statistics are pure and open, albeit I doubt most of the viewing public understand confidence intervals; but it's the flawed process of asking questions that causes the problem. The application of confidence intervals should be banned from all surveys because it's an abuse of statistics when the underlying data is inherently invalid.

For example, take a larger survey, such as the one conducted by the BBC recently about the choice of football club to take over the Olympic stadium after the games. This was a dangerously flawed piece of research, complete with contradictory responses and selective reporting of the results: they failed to focus on the statistic that found only 4% of people wanted the stadium's usage to be as was intended from the bid that 70% of the same sample said they believed should win. The only reasonable conclusion was that people didn't understand the issue; not much of a headline in that, though. It wouldn't have mattered how many people were asked the BBC's questions: their media-coverage-influenced responses (coverage at the time was massively over-simplifying the debate about the bids in favour of West Ham's), along with whatever priming had taken place within the questionnaire (not all of which was published), were just as much an abuse of research as anything Rimmel or Clarins has done.

I think it would be great to see the research industry tackle the use of survey-based claims in ads, but it can only do so with any integrity if it is prepared to reflect on its own use of quantitative research data. The notion that such surveys are routinely capable of providing an objective measurement is simply a myth, and there is an abundance of evidence that, were clearer disclaimers to be required, would leave most surveys with lists of disclaimers longer than the original questionnaires.

13 years ago

It is unfair to criticise the use of "research" results in TV ads when the research industry itself is misusing survey sampling and probability theory. It is very clear under what conditions a researcher can use confidence intervals. How many research companies and clients disclose their research methods? How often do we see a report claiming that a "representative" sample has been used, from which inferences about the general population are made, when it is obvious to anyone that this claim cannot be made if an online panel or any other incomplete sample source was used as the sampling frame? O'Muircheartaigh (1997) proposed that error be defined as "work purporting to do what it does not do". The same applies to a lot of research.

13 years ago

Nine out of ten? I see what you did there ...

13 years ago | 1 like

Peter, Nick, John and Richard - thanks for your comments. Glad to find yet more people who feel the same way about this as I do. Anon - I would argue that all reputable research agencies currently disclose their research methods as a matter of course (and in line with their respective quality accreditations). Both yourself and P. Graves fail to understand (or acknowledge) that the research industry has never purported to provide a 'definitive answer' to any given question; it exists to provide the best information and evidence available in order to inform and support the decision-making process for both client and consumer alike. Clearly then, when any given advertiser publicises research results that are both misleading (through the use of unsubstantiated claims, confusing terminology etc.) and ill-informed, they are not acting in accordance with the research industry's 'unwritten code' and raison d'ĂȘtre.
