FEATURE | 1 April 2010

Can't get no satisfaction

Tom Simpson, managing director of Simpson Carpenter, asks whether the millions that companies spend on customer satisfaction research actually make customers any more satisfied.


It might be presumptuous of me, but I assume that if you’re reading this you must believe that research works – that what we do can help improve the customer’s lot. And surely this link between research and improving outcomes for customers must be at its closest when applied to customer satisfaction surveys.

In theory, it’s all very easy: you ask your customers what you’re getting right, what you’re getting wrong and how you can improve. And then you go about putting what you’ve got wrong right.

Year in year out, businesses around the world spend more than an investment banker’s bonus on finding out what their customers think of them. They use five-point scales, eleven-point scales and all points in between. Importance is measured using linear statistics. Some track mean scores, others report the top two boxes. Still more believe that if you subtract the bottom seven boxes from the top two, the meaning of life will be revealed to you. Then companies take all these numbers and apply action standards, impose supplier KPIs and put customers at the heart of their corporate mission statements to make sure everything is part of one big continuous improvement process.
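To make the arithmetic concrete, here is a minimal sketch in Python of the rival summary measures just described, using invented ratings on a 0-10 scale; every figure in it is illustrative rather than drawn from any real survey.

# Minimal sketch of the rival summary measures described above, using
# made-up ratings on an 11-point (0-10) scale.
ratings = [10, 9, 9, 8, 8, 7, 7, 6, 5, 3, 2, 10, 8, 6, 9, 4]
n = len(ratings)

# Mean score: what many trackers report.
mean_score = sum(ratings) / n

# Top-two-box: the share of customers giving one of the two highest scores (9 or 10).
top_two = sum(1 for r in ratings if r >= 9) / n

# "Subtract the bottom seven boxes from the top two": promoters (9-10) minus
# detractors (0-6), expressed as a percentage, in the Net Promoter style.
bottom_seven = sum(1 for r in ratings if r <= 6) / n
net_score = (top_two - bottom_seven) * 100

print(f"Mean: {mean_score:.1f}  Top-two-box: {top_two:.0%}  Net score: {net_score:+.0f}")

Run on the same set of answers, the three measures produce three very different-looking headline numbers.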

Research works, right? So what's the problem? Well, the last time I looked, not all customers were deliriously happy. Just try a little experiment and you’ll see what I mean. Talk to a few friends, log on to Mumsnet or ask a Facebook group if they’ve had a problem recently with their bank, say, or with their utility service, or when getting their car serviced. Make sure you’ve got plenty of time free when you do this, because within a second you’ll be overwhelmed by outraged customers.

Why is this? How can we be getting it so wrong? Is it because we raise expectations to a level it’s impossible to meet? Do we sell the sizzle so well that no sausage can possibly taste that good? I’m sure that’s part of it. But I also think it’s because in the huge industry that is customer satisfaction we’ve allowed ourselves to lose sight of one important ingredient – the customer. Far too often customer satisfaction surveys are designed to meet management’s need for metrics to monitor performance across a range of departments and contracts.

“It’s high time for us all to stop complying with requests to bombard customers with meaningless questions in order to fulfil a need for internal metrics”

This is what leads to so many long and tedious questionnaires that have precious little to do with keeping the customer happy. We’ve seen a car manufacturer’s service customers being asked about the quality of the coffee at the dealership. We’ve seen passengers on long-haul flights being asked about the freshness of the fruit with their meal. What we haven’t seen is any evidence that beverage quality and satsuma freshness are key drivers of automobile or airline choice.

Unless things have shifted so far that we no longer believe research has a part to play in improving the customer’s lot, we need to have a serious rethink about how we approach customer satisfaction. Collectively, our role as researchers should be to put the customer back into the customer satisfaction process. It’s high time for us all to stop complying with requests to bombard customers with meaningless questions in order to fulfil this need for internal metrics.

At the risk of preaching to the converted, here are eight golden rules that we would all do well to follow if customer satisfaction research is to make the slightest bit of difference to customers.

1. Focus on the customer, not the brand
Our first and most important rule when designing customer satisfaction surveys is to remember that if you want to keep your customers happy, finding out what they want to tell you is far more important than finding out what you want to know. Put yourself in your customers’ shoes. If you don’t like being asked to rate dozens of attributes without being given the chance to nail the one thing that’s bugging you, then why should they? Get this wrong and you’ll end up with research that does more harm than good. Let the interview be customer-driven – find out exactly how you measure up across a few broad areas then dig deep to find out why. If the quality of the coffee really is a concern then they’ll tell you about it.

2. Avoid data greed
We’ve all had the request. “As we’re talking to such a big sample of customers can we just ask them a few extra questions on…” It’s hard to resist at the best of times, even harder when money’s tight. Try rephrasing the question: “As we’re talking to a lot of the people on whose repeat custom our business depends, why don’t we really bore the pants off them?” That way the message should get through.

3. Use statistics wisely
So many studies track trends in mean scores or some other measure of the average. I’d argue that this is one of the least informative measures. My first priority would always be the customers who are unhappy. Detractors are the people who can and will hurt your business and, with the advent of the digital age, a disgruntled customer can now do a huge amount of damage to a brand with just a single tweet. So you need to know why they’re unhappy and what, if anything, you can do about it.
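As a small illustration of the point, again with invented figures, two tracking waves can post almost identical mean scores while one hides three times as many unhappy customers:

# Illustrative sketch (invented numbers): near-identical means, very different
# stories about the unhappy tail.
wave_1 = [9, 8, 8, 7, 7, 7, 6, 7, 8, 7]        # fairly even spread of scores
wave_2 = [10, 10, 9, 10, 9, 3, 2, 10, 3, 9]    # polarised: delighted and angry

def summarise(wave):
    mean = sum(wave) / len(wave)
    detractors = sum(1 for r in wave if r <= 6) / len(wave)
    return mean, detractors

for name, wave in [("wave 1", wave_1), ("wave 2", wave_2)]:
    mean, detractors = summarise(wave)
    print(f"{name}: mean = {mean:.1f}, detractors = {detractors:.0%}")

# Both waves average 7.4-7.5, yet wave 2 has three times as many customers
# in the unhappy tail: the people who can and will hurt the brand.

A tracker reporting only the mean would see no change between those two waves; a tracker reporting the share of detractors, and following up on why they are unhappy, would.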

4. Beware of subjective measures
Remember that expectations and needs vary dramatically from one customer to another. Some customers want to get in and out of the store as quickly as possible, others like a chat at the cheese counter. So you really do need to take care in interpreting subjective measures such as standard of dress – if a hardcore biker is unhappy with your sales staff’s standard of dress, they’re probably looking for well-worn leathers, not well-cut suits.

5. Honour the rules of engagement
Remember that, while you may spend much of your life thinking about your brand, you’re lucky if your customers spend a moment of theirs. So they’re unlikely to make the distinctions that you do between attributes such as quality, reliability and durability. And they can quickly get annoyed with having to repeat themselves. In these instances just pick one – your customers will thank you for it.

“We’ve encountered numerous sales reps who coach their customers on what scores to give on a customer satisfaction survey”

6. Beware of foul play
Linking staff rewards to customer satisfaction scores is a real minefield. Get it wrong and you run the risk of encouraging behaviour that distorts the whole process while making the customer less, not more, satisfied. We’ve encountered numerous sales reps who coach their customers on what scores to give on a customer satisfaction survey and we’ve even found a car dealer (a very big one) handing out leaflets on the subject. If measuring customer satisfaction is to achieve its ultimate goal of improving the customer experience we urgently need to make clients appreciate the danger of inadvertently triggering behaviour that damages their business.

7. Tell the right people what they need to know – and what they need to do
Research that doesn’t get to the people who need to know about it is a wasted exercise. Results from customer satisfaction studies belong with frontline staff as much as they belong in the boardroom. After all, changing behaviour in order to improve the overall customer experience is the ultimate goal, and unless you can tell the people on the ground what they need to know and, more importantly, what they need to do, measuring customer satisfaction will never be more than an academic exercise.

8. Round up the lost sheep
Don’t forget to include lost and lapsed customers in your customer satisfaction surveys. You really do need to keep track of how many are deserting you and why. It may sound counter-intuitive, but failing companies often find their customer satisfaction scores rising as a small rump of customers remain loyal while all others abandon ship.
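A toy calculation, with made-up figures, shows how the effect arises: if dissatisfied customers quietly defect and only current customers are surveyed, the tracked score climbs even as the customer base shrinks.

# Toy example (invented figures) of the "lost sheep" effect.
customers = [9, 9, 8, 8, 7, 6, 5, 4, 3, 2]   # year 1: the full customer base

def mean(xs):
    return sum(xs) / len(xs)

print(f"Year 1: {len(customers)} customers, mean satisfaction {mean(customers):.1f}")

# Year 2: everyone scoring 5 or below has taken their business elsewhere,
# so they never appear in the next wave of the survey.
survivors = [r for r in customers if r > 5]

print(f"Year 2: {len(survivors)} customers, mean satisfaction {mean(survivors):.1f}")

# The tracker shows satisfaction rising from 6.1 to 7.8 even though 40% of
# the customers have gone. Only by surveying lost and lapsed customers
# does the real picture emerge.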

Making a difference
The aim of customer satisfaction research should be to bring about positive change for your customers. But in spite of all the millions that get spent on customer satisfaction research, I’m not convinced that customers are significantly more satisfied than they were.

I’m not suggesting that customer satisfaction studies are a waste of time, but if we believe that customer satisfaction research should have an impact on the actual customer experience we need to alter our approach.

I suspect that many researchers will be with me on this. The challenge we have will be making stakeholders understand that any customer satisfaction study is in itself a brand touchpoint. Only then will we succeed in ridding ourselves of the notion that customer satisfaction research is first and foremost a measurement exercise, and recognise that its sole function should be to listen to what customers have to say.

Tom Simpson established Simpson Carpenter in 1998. He works in the automotive, technology and retail sectors, focusing on branding and customer satisfaction. Previously he was chairman and CEO of the Harris Research Centre, the UK arm of the Sofres Group.

4 Comments

14 years ago

I couldn't agree more with Tom on this. Having followed all these golden rules with our clients for the past thirteen years, I have to make it clear that some agencies do get it right. The problem is that larger agencies claim to know about satisfaction measurement but they don't. They are the ones who perpetuate long surveys that ask the wrong questions and drive the whole project the wrong way, so it is no surprise that customers put through such surveys see no benefit and think research is a waste of time. I know there is limited space in such articles, but another golden rule is to ask the right people, meaning those who actually influence purchase decisions. For consumers this is pretty simple, but for B2B relationships it is much more complex. It needs to take account of the key touchpoints between supplier and client, and over the years we at The Leadership Factor have learned that most suppliers do a pretty poor job of knowing who these people are (which must make it harder to build a relationship, mustn't it?). Short, customer-focused, meaningful surveys with the right people, properly analysed to highlight worthwhile improvement actions that have senior-level commitment and staff involvement, are a certain recipe for success. Running things this way, we can prove that customers do get happier and the business becomes more successful.


14 years ago

Satisfaction for me is more a measure of how dissatisfaction changes (see Kahneman on sensitivity to negatives vs positives). Satisfaction itself is an important benchmark, but it is so contextualised that it has little meaning beyond a threshold level, i.e. in terms of impact on return. Moving an 8 out of 10 to a 9 out of 10 may look to the board like a good idea, but the assumed linear relationship between score and value does not hold. In fact an 8 out of 10 may be just right: what really matters is what the figure means and, even more importantly, what it could mean.


14 years ago

I would be surprised if many researchers disagree with Tom’s article. Our experience is that questionnaire lengths have decreased, more intelligent filtering is used to keep surveys short and to the point, and sampling is generally done more carefully. However, satisfaction surveys can also be used as continuous quality measurement and as an early-warning tool to identify problems. I suggest doing this with a brand’s own employees first – after all, they are usually also customers and are often incentivised to use their employer’s products. Employees should of course also be more motivated to help identify problem areas early. We have been doing this for years with a German car manufacturer and it would certainly work in other businesses, for example in retail or in consumer goods manufacturing.


14 years ago

It is great to see Tom opening up this area for some scrutiny. He is absolutely right that many millions are wasted on pointless satisfaction surveys. And he has identified one of the key reasons for this. Too many organisations use sat scores for internal back-patting or staff remuneration purposes. I have even heard of a major organisation that excludes all anonymous results, the ones most likely to be negative, and still praises itself for how well it is doing. Until the Chief Insight Officer has a seat on the board we may not have the voice of the customer being heard strongly enough. In addition, many ad hoc satisfaction surveys use question sets designed without recourse to the latest thinking about what drives B2B or B2C relationships. Organisations could certainly incorporate new thinking, such as Customer Value Management, Customer Relationship Quality (CRQ) Management and the impact of differentiation, to get better value from their budgets. The appetite for change in satisfaction research is growing. Clients want more valuable intelligence, faster. And we support that 100%. Maybe it's time for all the old traditional inward-focused sat surveys to be updated. What do you think? laurence.obrien@deep-insight.com
