OPINION
4 June 2012

The disruptive technology challenge


Technology gives companies plenty of opportunities to source quick customer feedback. IJMR editor Peter Mouncey asks: what are we losing in the rush to deliver?

A few years ago, in the pages of the International Journal of Market Research (IJMR), Cambiar’s Simon Chadwick warned us of the dangers to the market research sector posed by disruptive technologies (‘Client-driven change’, Vol. 48 Issue 4, 2006). There was an interesting example of this in the March issue of Information Age: a case study describing the new real-time process that Eurostar has implemented for measuring customer satisfaction.

The article uses the terms ‘system’ and ‘software vendor’ to describe the text-based method for collecting feedback from customers, based on two questions: ‘How was the experience?’ and ‘Would you recommend it to a friend?’ Respondents can also add open-ended text comments.

Here’s how the system works. Eurostar sends a list of customers who have provided their mobile phone numbers to the software company, which randomly selects a sample of 1,500, from which around 150 responses are expected. These responses appear on a live feed in a web portal, and Eurostar then uses a visualisation application to plot trends. The data is then fed into a model that blends company performance with customer reactions. All this replaced what Chris Hardey, Eurostar’s customer intelligence manager, described as a “slow” quarterly reporting process that made it “hard for the customer intelligence team to respond to issues in a meaningful way”.

The modern Eurostar example provides an interesting comparison with the process that my colleagues and I created while working for the Automobile Association in the mid-1990s, described in the 1995 MRS Conference paper ‘Will you still love me tomorrow’. Back then we used a market research company, which was sent each day a sample of randomly selected records automatically generated from the previous day’s list of roadside breakdowns. The sampled respondents were mailed a 28-question survey, designed on the basis of extensive qualitative research to ensure that the underlying drivers of satisfaction could be identified.

The customer view was analysed together with AA service data. We achieved response rates averaging 45–55% (using one follow-up), and reports were produced monthly. Based on the findings, successful changes were implemented in the structure, culture and operation of the core roadside service in a highly competitive marketplace.

As we see with Eurostar, technologies from new competitors – sometimes working outside the ‘traditional’ research sector – provide opportunities to deliver very fast results to companies, displayed in imaginative ways. But what gets compromised along the way? And does it matter? For Eurostar, mobile technology delivers that quick response, but only from those customers who provide a contact number. Feedback content is very limited, and there’s that low response rate of 10%. But maybe that’s all the client needs to fine-tune the service experience (even though by then the customer ‘has left the building’).

At a recent Association for Survey Computing conference, Mark Hirst of Watermelon described the process developed to encourage Post Office customers to use digital methods to report their service experience. Again, the emphasis was on using new technology to obtain instant feedback, but without any apparent attempt to see if these responses represent the views of the overall population of users. Maybe there is a further survey-based process that delivers the context.

However, in many situations I believe that measuring customer experience requires a more complex research design to ensure that the findings provide a firm basis for decision making – especially if the outputs will be used as key inputs to decisions that lead to re-allocation of scarce resources, cultural change or building a case for major investment.

Peter Mouncey is editor in chief of IJMR, a sister publication to Research


12 years ago

This is a really interesting article. As for the “does it matter?”, the question for the users of “instant research” is “is this representative of all of our users?”. However, the context of the collection is also an issue. For those using social media technologies such as Twitter, the data is in the public domain, so there’s a PR angle to consider that isn’t there with a privately collected customer satisfaction survey. I think the challenge for the MR industry is to compete with these newer technologies on turnaround times for the delivery of data. It’s also important that, in selling our services, we point out the differences between unrepresentative reactive technologies and representative data. Both have a useful role to play for our clients, but distinguishing between “apples and oranges” and concentrating on the quality and reliability of data could be our focus. It’s great that these challenges keep us on our toes, but it also needs to be clearly stated that “new” may not be “better”.


12 years ago

Peter – I also agree it’s an interesting article. I just want to clarify that yes, the Post Office does have a range of customer feedback channels that ensure a true cross-sectional view is taken. The Post Office is keen to be inclusive to all its customers, giving them channel choices that suit them, so they can give their feedback how and when they choose. These channels include both new and older technology. Using a range of feedback channels is about widening the opportunities that customers have to give feedback at an individual branch level, as well as complementing the depth that is captured at the granular level through more considered feedback gathered via traditional channels.
