FEATURE 20 January 2017
Hear my voice
Technology is making it easier than ever to hear what customers think thanks to two-way conversations and better analytics. But there’s a balance to be struck between easy and useful, as Tim Phillips reports.
In his April 2014 letter to shareholders, Amazon CEO Jeff Bezos described the workings of Amazon’s ‘Mayday’ support service enthusiastically. Mayday is the customer service button on a Kindle Fire tablet that opens up a small video window to a real Amazon employee, and had been launched the previous September. Customers had used it to ask for technical support, report a problem, or give feedback, but – in its first eight months of operation – Amazon discovered that Mayday was generating quite a few requests that were harder to pigeonhole. Its agents had received 35 proposals of marriage, had been asked 109 times for help ordering a pizza, and fulfilled the request to sing ‘Happy Birthday’ to users 44 times – while three customers had called to ask for a bedtime story.
“Nothing gives us more pleasure at Amazon than ‘reinventing normal’,” Bezos buzzed.
For many research companies and their clients, the pleasure of listening to the voice of the customer (VoC) is mixed with pain. As well as Mayday, research platforms such as those provided by Qualtrics or Medallia, and self-service or automated technology start-ups such as ZappiStore, GutCheck, SurveyGizmo and SurveyMonkey, have comprehensively ‘reinvented normal’ for how researchers – professional and amateur – listen for the voice of the customer.
It has never been easier to assemble large amounts of data on the customer experience (CX), and CX has joined VoC, NPS and CRM in the abbreviated lexicon of customer care for all consumer-facing – and many B2B – organisations.
But what’s the best way to listen to this voice, and how do we know that what it tells us is useful? Just because we can ask the question, should we? And if that voice inspires us to act, what do we do next – and how do we know whether what we did worked?
Today, the multichannel, always-on, technology-driven world of much customer experience research is a million miles from the rigorous, process-driven origins of the phrase ‘voice of the customer’. Total Quality Management (TQM) – a phrase coined only in 1969 – and the discipline of continuous improvement in manufacturing helped popularise practices that had galvanised the Japanese economic recovery since the 1940s. Mitsubishi’s Kobe shipyards started using formal methods to manage quality in 1972, and Toyota followed in the late 1970s.
Organisations such as Motorola, IBM and General Electric copied these techniques to harness technological progress for western consumers during the 1980s and 1990s. While the precise details of TQM implementations varied, every one had a customer focus that formally incorporated the VoC into the design process. The power of the approach was that companies could trace a direct line of cause and effect between the process and the output. Mitsubishi, for example, achieved 60% reductions in design costs.
In their 1993 paper ‘The Voice of The Customer’, published in Marketing Science, Abbie Griffin, of the University of Chicago, and John Hauser, of MIT, introduced this new discipline to a marketing audience. They explained that the process “encourages other functions besides marketing to use, and in some cases perform, market research”.
They argued that existing market research captured broad-brush customer needs, but not enough detail to help engineers design improvements to their products. VoC research in this model was a company-wide process – integrated with product design – in which improvements were tested continuously against the list of customer needs that researchers had identified. R&D was also rigorously prioritised according to those needs.
The limitation at the time was how many voices the company could listen to. Research was a narrow, deep, fundamentally qualitative exercise. “In a typical study, between 10 and 30 customers are interviewed for approximately one hour in a one-on-one setting,” the authors reported. That small sample informed the whole organisation – not just an expert few – about what the customer wanted.
A quarter of a century later, VoC is ever-present. Customer experience surveys are pervasive thanks, first, to email, then to mobile technology, and now to data-collection methods such as wearables. NPS is used by more than two-thirds of Fortune 1000 companies – but some argue that more isn’t better. “We’re facing data democratisation,” says Nigel Hollis, executive vice-president and chief global analyst at Kantar Millward Brown. “You can go to ZappiStore or SurveyMonkey, create your own survey and launch it. The fundamental issue is that there’s a huge number of really bad surveys being launched.
“If I’m blunt, my concern is the response: ‘Oh God, not another satisfaction survey.’ I bought a pair of shoes the other day, and they asked me how great the experience was.”
“That sort of ‘ugh’ feeling you get when a survey lands in your inbox is definitely pervasive,” says Bri Hillmer, the documentation coordinator at SurveyGizmo.
In May 2016, Bloomberg interviewed Fred Reichheld, of Bain & Co, who – as the creator of Net Promoter Score (NPS) – has probably had more influence on modern CX practice than anyone. The article was titled ‘The Inventor of Customer Satisfaction Surveys Is Sick of Them, Too’. “The instant we have a technology to minimise surveys, I’m the first one on that bandwagon,” he said.
The assumption in the headline is clear: as we pile up the data, we risk losing the link between hearing the voice of the customer and acting on what that voice is telling us.
Carol Haney is a senior research scientist at Qualtrics, one of the new generation of research companies that, like many of its customers, was “born digital”. As a unicorn – a private company with a valuation of more than $1 billion – Qualtrics is recently fashionable, but for 10 years grew organically, and profitably, by emphasising rigour over flashiness.
Its longest-standing customers are in the academic community, and it is attempting to combine that rigour with a self-service delivery model: customers can gather data using the most appropriate collection method, with templates and guidance to help ensure they don’t become purveyors of Hollis’s “really bad surveys”.
Technology has made it possible to find good VoC data quickly and easily, in many ways. It has also made it much easier to get lots of bad data.
Haney argues that understanding the customer’s motivation and emotions when giving feedback is a fundamental skill. “When you take an Uber rating, the driver is just part of the relationship, right? It makes it so simple that it takes just seconds to provide that feedback, but it’s incredibly valuable. You know it’s high quality because they are getting an enormously high response rate. But it doesn’t mean you can’t take a longer survey. We have a client who has a 30% response rate on a 25-minute VoC survey. This travel company sends out the survey right at the end of the experience, so it gets rich, immediate data with folks who are just sitting there waiting to finish their travel experience; they just had a very intense time and – at that time – want to give the feedback. It’s really the thoughtfulness that you put into where that survey is, where the client is, how intense that experience was, and how much time they have to give feedback.”
Imaginative data collection methods concentrate on making the VoC collection process passive – or even fun. Lifelogging, for example, can be much more intimate, almost a game, using devices such as Google Glass (see box, p28), and Kantar Millward Brown is just one established market researcher working with Affectiva, whose facial-recognition technology can capture emotional responses without (possibly inauthentic) mediation and rationalisation. But technology can be just as off-putting – and just as misleading – as traditional methods of listening to VoC.
Haney’s colleague Juliana Smith Holterhaus – a principal research strategist at Qualtrics, with a research background in how we relate to our mobile devices – warns that clients should resist the temptation to believe that “more is more”.
“If you’re considerate and thoughtful in the way in which you engage with your customers, they will gladly provide meaningful feedback. So you don’t want to ping them excessively; think of the human on the receiving end of the message and you’ll have much better data,” she says.
One of the problems is trying to retrofit existing data-acquisition processes into a new technology format, simply because things have been done that way in the past. Tech has to be a way to reimagine VoC, Holterhaus argues.
“Some companies are very tied to benchmarks and norms in previously collected data, but others are willing to start from scratch. The ones that are willing to start again are often the best from our standpoint, because we can help them shape their strategies in a thoughtful, purpose-driven way, to determine what questions they want to answer. We can reverse-engineer the research design and technology solutions to achieve the most meaningful insights.”
For example, Unibet’s tagline is ‘By the players, for the players’, which means the voice of the customer is one of its brand values. In 2014, Unibet launched a CX project to create a greater focus on its customers. It set a defined goal: to grow customer satisfaction by 11 percentage points from 2014 to the end of 2016 – a goal that was achieved in Q1 of 2016.
But to do this, it mixed quant and qual. For the former, a customer experience survey was developed with Qualtrics, tailored to limit responses to recent transactions. “We don’t want customers to recall an experience that is too far back. And by focusing on the areas where we’re underperforming – but that are strongly related to loyalty – we are able to make financial forecasts for each project,” says Søren Moesgaard, customer experience analyst at Unibet.
One of the most important aspects of the VoC research that drove TQM was its depth; although few customer voices were heard, researchers took time to listen to all of their needs, and often discovered problems companies didn’t know they had. This is why Unibet also calls people who have volunteered to be contacted, to dig a bit deeper into customer frustrations.
Beth Benjamin, Medallia’s senior director of research, thinks the discipline of text analytics has the potential to create innovative “mass qual” insights that surveys can’t – and it can be automated.
“We had the example of a cruise ship, where text analytics was the only way they could understand what was causing the problem,” she says. “The ship had been refitted, but the way it was configured made it very difficult for passengers to get to the dining facilities.” Not surprisingly, none of the closed questions in standard surveys asked about dining room layout, but that was the driver of poor customer experience. By doing simple analytics on the free text responses, the client captured the problem.
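The “simple analytics” Benjamin describes can be as basic as counting recurring terms across open-ended responses. Here is a minimal sketch in Python – the responses, stop-word list and output are purely illustrative, not a description of Medallia’s tooling:

```python
from collections import Counter
import re

# Hypothetical open-ended survey responses
responses = [
    "Took us twenty minutes to find the dining room after the refit",
    "Dining room is impossible to reach from the aft cabins",
    "Loved the cabin, but getting to dinner was a maze",
]

# Illustrative stop-word list; a real one would be far longer
STOP_WORDS = {"the", "to", "a", "is", "was", "but", "us", "from", "after"}

def term_counts(texts):
    """Count non-stop-word terms across free-text responses."""
    counts = Counter()
    for text in texts:
        words = re.findall(r"[a-z']+", text.lower())
        counts.update(w for w in words if w not in STOP_WORDS)
    return counts

# The most frequent terms point at themes no closed question asked about
print(term_counts(responses).most_common(5))
```

Even this crude frequency count would surface “dining” as a recurring theme – the kind of signal a fixed questionnaire, designed before the refit, could never have anticipated.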
The visibility and simplicity of VoC research, though essential, creates its own problem – everyone wants to ask a question. Research-platform operators may build assistance into the software – for example, templates that reflect best practice – but there are internal pressures that make useful surveys longer or more frequent, until they lose their value.
Hillmer – whose alternative job title at SurveyGizmo is ‘survey sorceress’ – tries to advise the company’s self-service clients not to over-survey, but with limited success (see box, p35).
In the era of TQM, measuring the return on investment (ROI) of VoC research was simpler, because the ‘before’ and the ‘after’ were easier to delineate. In manufacturing, it was also easier to isolate product improvements, and the formality with which feedback was integrated meant that attribution was possible.
But in a service environment in which CX is ‘always on’, how do we know it is working? It could be that the improvements would have been made anyway, or that customers aren’t responding to changes in their service as much as pricing or market trends – or even that their stated experience doesn’t correlate well with their purchasing behaviour.
Not everyone is brave enough to take the approach that network equipment provider Ciena took in stopping all of its CX data collection for a year while it figured out what it should be asking, and why (see box, below). So what’s the alternative?
“Some companies understand it at such a deep level that it is part of their value proposition,” says Benjamin. “They don’t need to see the impact. But for others who wish to measure, it is very hard to do rigorously – there are so many things outside your control.”
For example, benchmark aggregates of NPS scores by industry fail to capture how those scores are operationalised, or even the question being asked. They tell you that if you are in the top 10% your CX is good and, if you’re near the bottom, it’s bad – but not the return on taking the journey from bottom to top, or the loss from going in the other direction.
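Part of the problem is how much an NPS figure compresses away. The score itself is simple arithmetic – the percentage of promoters (ratings of 9–10 on the standard 0–10 scale) minus the percentage of detractors (0–6) – as this minimal sketch shows; what it hides is how the question was worded, fielded and sampled:

```python
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    if not scores:
        raise ValueError("no responses")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# Illustrative ratings only; very different distributions can yield the same score
print(nps([10, 10, 9, 5, 0, 8, 7, 9]))  # -> 25.0
```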
It’s also tough to isolate cause and effect. Benjamin points out that a business involved in providing a service works with a number of partners to deliver it. The experience of the brand is the experience of all those elements working together. Unless they are all measured consistently, in an integrated way, it is impossible to disentangle the impact of each. Some aspects of a service are satisfactory if they meet a baseline – for example, network coverage for a telco; others are highly competitive. Without an insight into the relative contribution of each element of the service – and a model of the likely ROI of investment in each dimension – it is impossible to prioritise which ones need to be improved, and how. This is even harder when the providers are external, and contracted with a service level agreement.
Forrester offers one of the strongest indicators that investment in CX is money well spent. Its Customer Experience Index is a league table that separates CX leaders from laggards, based on its client research, which examined whether leaders are more profitable than laggards by comparing five pairs of companies in five industries and measuring annual growth since 2010. In cable and retail, ‘leaders outperformed laggards by 24 and 26 percentage points’, it reported, and ‘when we compared the total growth rate of all CX leaders to that of CX laggards we saw that the leaders, collectively, had a 14 percentage point advantage’.
If the investment decision is solely where to allocate existing CX budget, that’s a simpler task. Millennials, for example – having grown up in a culture in which they expect their voices to be heard – respond more strongly to service improvements, Benjamin points out. You can measure how strongly by comparing the impact of VoC-inspired improvements in different segments. This is an advantage that incumbents have over disruptive challengers, because they have a stable base to segment. “A large company, in theory, should be able to learn more than a small company. This is innovation at scale,” she adds.
Technology has expanded the potential ways to hear VoC, and it has also changed the possibilities for distributing the information to people who would act on it. For companies such as Ciena and Airbnb (see boxes, p31 & 32), the flexible integration of data into the operational processes of the company – and the remuneration of its staff – echoes the TQM structure that inflexibly integrated the VoC into production improvement. ZappiStore’s website states that ‘Market research takes too long and is too expensive’, and – at least in some of its work – its customers, such as Coca-Cola, LG and Ford, agree.
Christophe Ovaere, ZappiStore’s CMO, told Impact in April 2016: “There has never been innovation around process, cost and time. We are one of the last industries that hasn’t optimised the delivery process.”
This may be one area in which traditional market research can complement technology platforms. It wasn’t always so. Hollis, at Kantar Millward Brown, set up a project to automate surveys online during the first dotcom boom. “I was running an office in San Francisco in about 2001, with 50 techs and a bunch of servers stuck in a back room,” he says. “We had a working system and some of my colleagues on the board of Millward Brown basically thought I was creating the devil – or maybe I was the devil. I was seen to be undermining the entire rationale for Millward Brown’s existence.
“These days, I’m pretty sure the senior management team recognises the absolute benefits of going down this route.”
But what, for the client, is the benefit of using a traditional research company to help it hear VoC? Although the technology and business structure are new, much hasn’t changed. Someone still has to figure out if the numbers that are being measured reflect genuine preferences or emotions. Someone has to advise on the correct mix of ways to collect the data, and how to elicit VoC that you don’t expect to hear using qualitative research or in-depth interviews. And someone can help the client to create the sort of early-warning systems (Hollis calls them “hot alerts”) to find the signal in the noise of VoC feedback.
While all of these tasks overlap traditional market research roles, automated surveys and continuous VoC programmes change the emphasis on what has value to the client – and, ultimately, to the customers.
“There are needs for shifting skill sets; we must be able to separate out noise from signal and underlying trends,” Hollis says. “If I were working at a client company I would feel much more comfortable talking to somebody who’s got experience in my category, experience in my brand, and experience beyond just doing a particular type of research.
“It does require new tricks, but on our teams – where the client has embraced an automated approach – what they’re finding is that their time is freed up. There’s less day-to-day project management, and more ‘what does this really mean?’”
The spread of affordable internet and access to technology mean modern-day consumers are more informed than ever before. With instant digital access to information, every individual and business is free to compare, contrast and purchase from firms across the globe – both online and at physical stores.
In the past decade, the balance of power in the consumer-company relationship has shifted dramatically towards the customer – be they a business or an individual. Competition among firms for every consumer is at an all-time high – and it’s growing by the day.
If companies want to succeed in this hyper-competitive environment, they will have to create lifelong brand loyalty by focusing white-glove treatment on the people who purchase their products and services.
More than ever, they must leverage the customer experience and emotional ties to succeed. Why? Because customer experience, including emotional ties, creates higher-value, longer-term customers, more referrals and lower churn.
Customer experience has been largely overlooked or treated as a second-order concern, but strong customer experience separates industry leaders from industry failures. And voice-of-the-customer programmes are critical for any company looking to master customer experience.
There’s no question that the spoils go to those who lead in customer experience. According to a study by Forrester, public companies deemed ‘CX leaders’ outperformed ‘CX laggards’ by nearly four times over an eight-year period.
A separate study by Bain & Co, in 2015, found that companies deemed to be industry net promoter score (NPS) leaders experienced revenue growth nearly twice that of their peers over a comparable period of time.
And companies are doing much more than just traditional customer experience management; many of the world’s most disruptive brands are designing new and vital approaches to customer experience.
For example, Uber dynamically designs, tests and launches ad-hoc studies after every customer engagement to better understand how drivers and passengers react to different driving situations. This data is used to inform how it onboards drivers and customers.
In addition, an international hospitality company realised that its legacy customer experience programme wasn’t capturing feedback fast enough, so it introduced a new system to collect in-moment comments from people. During their stay, guests receive an SMS text message to gauge satisfaction; any negative response triggers an alert, so the company has an opportunity to improve the experience immediately.
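A minimal sketch of that kind of closed loop, assuming satisfaction scores arrive as already-parsed SMS replies; the threshold and alerting hook here are illustrative, not any named company’s system:

```python
from dataclasses import dataclass

ALERT_THRESHOLD = 6  # illustrative: ratings of 6 or below trigger follow-up

@dataclass
class GuestReply:
    guest_id: str
    room: str
    score: int  # 0-10 satisfaction rating parsed from the SMS reply

def notify_duty_manager(reply: GuestReply):
    """Stand-in for a real pager or ticketing integration."""
    print(f"ALERT: guest {reply.guest_id} in room {reply.room} rated {reply.score}/10")

def handle_reply(reply: GuestReply):
    """Route a negative in-stay rating to staff while the guest is still on site."""
    if reply.score <= ALERT_THRESHOLD:
        notify_duty_manager(reply)

handle_reply(GuestReply(guest_id="g-102", room="418", score=3))
```

The value is less in the code than in the timing: the alert fires while the guest is still in the building, so the problem can be fixed before it becomes a bad review.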
Intuit, meanwhile, automatically triggers surveys to customers and updates account data through a Salesforce.com integration, giving the software company the information it needs to service its customers optimally at every interaction.
There’s a common thread to CX leaders such as these: they own their programmes. For the same reasons that these organisations wouldn’t outsource finance or customer relationship management, they wouldn’t outsource their customer experience management.
We have worked with many great companies, and have found that they want the speed, power and flexibility that come with owning their VoC programme. Customer experience is in their DNA. They simply want the best ideas, technology and support to enable them to run it.