I am getting increasingly angry about the number of posts, books, YouTube videos and articles – often by market researchers themselves – that imply “conventional Market Research” is a failure.
Here’s a good example: ‘futurist’ Patrick Dixon talking about why market research is “often wrong”: http://tinyurl.com/25kp34z.
These sorts of pronouncements tend to have several things in common:
- Flashy style and grand pronouncements rather than reasoned argument,
- Reliance on anecdote or case study (in Dixon’s case it’s his mother),
- Lack of examples on the other side of the argument (when MR got it right),
- A (false) assumption that the raison d’être of MR is predicting “big” changes,
- Failure to acknowledge that methods other than MR are not all that flash at predicting big changes or seismic shifts in behaviour either,
- An assertion that “traditional MR” misses out on some extraordinarily key factor in understanding consumers, be it an inability to capture emotion, or failure to understand the role of Social Media or whatever uber-trend the author is fascinated by.
Let me counter this hyperbolic dismissal of the value of our traditional approaches with an equally strong counter-claim. I strongly believe that good, experienced senior researchers can – in most markets – answer 70% of the key marketing questions of 70% of major research clients by means of a research programme consisting of not more than a few focus groups, a reasonably sized survey and access to some sales, retail or media trend data. There is an “if” of course – and this is sometimes a big if – they need to allocate enough time and thought to carefully design the study and analyse the results. This does not mean I am not a believer in many of the new MR methods, particularly some of the new neuroscience, customer panel and online qualitative approaches – let us ‘seniors’ incorporate some of those into the research programme and my success estimate goes up to 80 or 90%! The core point I want to make, though, is that any systemic “failure” of market research is a failure to apply brainpower and thinking time – not primarily a failure of techniques.
Yes, there are increasingly “better” ways to directly measure emotional response than a traditional survey, but it is simply untrue to say that you cannot get any emotional feedback from a survey. I’ve done it – on topics ranging from instant coffee to prostitution and every level of intensity in between. Similarly, while online qualitative, MROCs and social media analytics can produce great feedback, the overlap with information that could be obtained from a conventional FGD is often a lot greater than those selling the new methods are willing to acknowledge.
So why do our analytics so often seem superficial if it is not primarily about method? Well, here are three examples of common faults that we keep seeing:
- Designs and outputs that bear only a moderate relationship to the brief that was given. This is often put down as a failure of the client servicing executive, but in reality it is often a failure of systems, within both the client and the agency, to ensure that briefs are structured properly, incorporated into the design, and checked against the outputs.
- Questionnaires that ask only one or two fairly unimaginative questions on the key client objective, and 30 others on all sorts of other, often very peripheral, issues.
- Focus groups where the moderator talks too much and over-acts, and where no one actually analyses the transcripts or videos later – the claim being that the client will get all they need from a “topline” or a post-group “debriefing”.
“Mundane” faults indeed, but it’s our failure to address such trivial problems that’s really at the heart of our industry’s seeming malaise. Often, when we talk to researchers about these issues, the fault is put down to clients – they “won’t spend the budget for analysis”, “their RFP was very rigid” or “they needed the answer tomorrow”. This is, of course, partly true – if clients pay peanuts they tend to attract servicing and outputs designed for monkeys. But usually, when you dig, you also discover issues of marketing, people and systems (added value not marketed well, senior people not spending any time on research, brainpower not valued or priced correctly, poor allocation of executive time, etc.).
Too often, as an industry, we seem very ready to accept the hyped pronouncements that we simply cannot measure “emotion” or “value” or be “predictive”, and hence the need to abandon all our old ways and move holus-bolus to new methods. (Nigel Hollis’ recent post on why conventional research can – usually – measure and predict value offers useful light in this context: http://tinyurl.com/2fggstp.) Since, for most agencies, changing business models overnight is not on the cards, this simply engenders a sense of frustration and failure that is, in my view, unjustified. It also seems to divert research management’s attention away from the most important need: working out frameworks and processes to get better at design, analysis and reporting. The tragedy is that, in many of the companies we’ve observed, a few months spent addressing a number of relatively minor faults in existing approaches could yield huge dividends in improving what is delivered to clients.
Let me repeat – I’m a huge fan of the new MR methods that are coming on-stream at the moment. But I’m also convinced that the big competitive advantage for most market research companies lies in addressing some quite basic faults in research practices – faults in how we deploy two of our key resources: brains and time. Companies that do that will be ready to embrace and integrate new technologies and methods properly, and to take full advantage of their capabilities. Those that simply buy into the “doom and gloom” forecasts about the inherent failure of current research will quickly find that the new techniques they rush to adopt still fail to answer clients’ questions, because they have not been backed up by the necessary design, implementation and delivery standards.
If we have a failure as an industry – and overall I think we do a lot, lot better than we give ourselves credit for – it’s more a failure to consistently provide quality in whatever we do than a failure of the kind of work we do. More a failure of research culture than of research methods, perhaps?
10 Comments
Annie Pettit
14 years ago
Ah.... So you're saying that QUALITY is what makes the difference. Couldn't agree more. We have become so focused on pitting quantitative research against qualitative research or surveys against focus groups that we have pushed aside the real issue of performing quality work from the beginning of the job, to the end of the job. Here, here for quality.
Michael Conklin
14 years ago
An important nuance of this is that quality requires time expended by senior people. There are no shortcuts in applying good thinking time to a client's problem. Unfortunately, we all seem to be trapped in an endless race to the bottom on price and timing. Senior resources applying time means either lower margins or higher prices. And even if clients allow higher prices, they rarely plan for timelines that allow senior resources to be effectively applied. Great post, BTW.
Alastair Gordon
14 years ago
Thanks Annie. I agree, but I'd go a step further: it's also about how we (agencies or MR departments) deploy that "quality" – it isn't an infinite resource – and about ensuring people understand what really contributes to "quality". Much time (and profit) in the MR agencies we look at is wasted because there is no clear prioritization of either business or research objectives, and people waste their lives doing lots of things (sometimes in the name of "quality") that make little difference to the client or to standards of accuracy.
Alastair Gordon
14 years ago
Hi Michael - thanks. I agree it is hard to convince clients to give more time or pay more. BUT most MR units we review have at least half a dozen ways they can free up "brainpower" - a lot of it is a wasted asset. And we are often remarkably bad at selling clients on the value of our time - we still have a kind of CPI (cost-per-interview) mentality to costing.
Carol Phillips
14 years ago
I wish I had written this. It is so true, and I fear we are afraid to say it. Experienced senior people who are marketers as well as researchers can design efficient approaches. It's not about the tools; it's about what you do with them. We have often helped clients who have a lot of data but little insight by creating focused qualitative investigations that give meaning to the numbers. Thanks for writing!
Deb Sleep
14 years ago
Well said! I am continually amazed at the way our industry beats itself up in public for no good reason and totally agree that we do a lot, lot better than we give ourselves credit for. If we are not prepared to defend what we do then how can we expect our clients to take our work and our recommendations seriously?!
Nasir Khan
14 years ago
Bravo. MR is developing, so it can't die. Researchers' inertia and short-cut methods can threaten the image of our great profession. Let's do research for research and not for business. If we do so, we will continue to be paid for our hard work - in cash and in appreciation.
Anon
14 years ago
Can you tell this to my bosses? They think everything can be answered in a survey question and allow no time for thinking and cogitation. (I work client-side after a long period as an agency researcher.)
Alex Garnica
14 years ago
Kudos! It was about time someone said something about this self-flagellation the MR industry is going through. An idea for 2011: a forum to gather cases and opinions, based on evidence, showing how MR, both traditional and new, has done great service for business, policy making and social decisions. Greetings from Querétaro, México.
Alastair Gordon
14 years ago
Hi, just looked at this, and am grateful for all the nice comments. It is time that we - as an industry - were more aggressive in defending our value. Hopefully 2011 is the year for that. But a couple of points. Nasir: I don't think there is a distinction between "research for research" and research for business. My partner (David McCallum) defines good research (from an agency perspective) in these terms: "We believe research is only successful if it is creatively designed, efficiently executed and exceeds client expectations while also providing a good financial return to the agency". No profit means no training, no bonuses and no investment in new methods - not good for the industry or researchers! And to "anonymous" - I'd be glad to tell your bosses that; indeed, some clients pay me to advise them on just such issues. But usually our reviews show that "time" is not just an issue for management - freeing you up for cognition may require new research frameworks, the introduction of new software and other reforms that require researchers to change their ways as well. Improving research is a bit of a collaborative thing!