
Opinion, 15 October 2014

Measuring satisfaction


The term ‘customer satisfaction’ is used frequently by marketing professionals and in many other industries. We understand that keeping customers satisfied is important, but why is it important to measure that satisfaction, and how is the measuring done?

Brands measure customer satisfaction for many reasons, but purchase behaviour is the largest driver of the need to measure it. Presumably, the more satisfied a customer is, the more brand loyal they will be; the more brand loyal they are, the higher sales will climb. This chain implies a positive correlation between customer satisfaction and purchase behaviour, so measuring customer satisfaction in order to predict other variables is key.

Once a company has tangible statistics about how satisfied its customer base is, taking action towards positive brand changes is that much easier. If customers are highly satisfied with one domain of an organisation, e.g. 4G LTE service, but highly dissatisfied with dropped calls, marketers can identify the concern and tailor ad campaigns or new product launches to the weaker brand areas.

Net Promoter Score

The Net Promoter Score (NPS), which looks at the willingness of a customer to recommend a product or service to friends or relatives, is frequently used.
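As a reminder of the mechanics, the standard NPS calculation asks respondents to rate their likelihood to recommend on a 0-10 scale, classes 9-10 as promoters and 0-6 as detractors, and reports the percentage of promoters minus the percentage of detractors. A minimal sketch (the ratings used are illustrative):

```python
def net_promoter_score(ratings):
    """Compute NPS from 0-10 'likelihood to recommend' ratings.

    Promoters score 9-10, detractors 0-6; passives (7-8) are ignored.
    The result ranges from -100 (all detractors) to +100 (all promoters).
    """
    if not ratings:
        raise ValueError("need at least one rating")
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / len(ratings)

# Ten illustrative responses: 4 promoters, 3 passives, 3 detractors.
scores = [10, 9, 8, 7, 6, 10, 3, 9, 7, 5]
print(net_promoter_score(scores))  # (4 - 3) / 10 * 100 = 10.0
```

Note that a single summary score like this compresses a lot of detail, which is part of the criticism that follows.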

In addition to the NPS, companies may employ some form of data tracking over time to measure customer satisfaction. Tracking data is primarily collected longitudinally and compared from year to year or advertisement to advertisement, which makes it a great way to compare product launches, advertising reach and customer data. Although these insights are valuable, they are hardly ever used to their full capacity.

Problems with measuring  

Most customer satisfaction measurement, including the NPS, is fundamentally flawed for several reasons:

  1. It isn’t actionable. We measure satisfaction, and in the case of NPS the likelihood of recommending, but then find out that most people don’t actually make recommendations.
  2. We don’t see differences in the data: most satisfaction surveys ask for ratings across a large number of metrics (30-50 items is common), using a Likert scale. However, our research suggests that only a fraction of the real estate in a scale is actually used – 80% of respondents use only one or two response options for 75% or more of questions. When everything is rated a three or four out of six, how do you know what needs improvement?
  3. It isn’t linked to cost data: the biggest problem is that all too often we look only at the statistical drivers of satisfaction, without an understanding of the cost to improve those services. The drivers with the biggest impact often carry the biggest cost to implement, so we ignore improvements that could be leveraged for a better ROI – relatively inexpensive changes that produce a disproportionate amount of value.
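The third point can be made concrete with a small sketch. Assuming a company has estimated each driver's statistical impact on satisfaction and the rough cost of improving it (the driver names and figures below are entirely hypothetical), ranking by impact per unit cost, rather than by raw impact, surfaces the inexpensive changes with disproportionate value:

```python
# Hypothetical driver data: estimated impact on satisfaction vs. cost
# to improve. All names and numbers are illustrative only.
drivers = [
    {"name": "network coverage", "impact": 0.40, "cost": 5.0},
    {"name": "billing clarity",  "impact": 0.15, "cost": 0.5},
    {"name": "call quality",     "impact": 0.30, "cost": 3.0},
    {"name": "store wait times", "impact": 0.10, "cost": 0.8},
]

# Rank by impact per unit cost: cheap, high-leverage fixes rise to the top,
# even though they are not the biggest raw drivers of satisfaction.
ranked = sorted(drivers, key=lambda d: d["impact"] / d["cost"], reverse=True)
for d in ranked:
    print(f'{d["name"]}: {d["impact"] / d["cost"]:.2f} impact per unit cost')
```

With these numbers, "network coverage" is the biggest raw driver but falls to the bottom of the ROI ranking, while "billing clarity" tops it, which is exactly the kind of opportunity a drivers-only analysis misses.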

These challenges are looked at in more detail in Tim Glowa’s book Measuring Customer Satisfaction, along with how customer satisfaction research can be made less descriptive and more actionable.

@RESEARCH LIVE

1 Comment


Many CSat programmes force customers to provide ratings on everything. It is much more effective to allow customers to choose what they rate you on (and to 'hide' the rating scale so it doesn't look daunting). If customers decide what to rate you on, it makes their task easier, it means weakly-held views are not mixed up with strongly-held views, and it gives more relevant and actionable data, since only the customer's priority areas get measured.
