FEATURE
16 January 2013
Helen Roberts, retail research director at GfK, argues that till receipt surveys (TRS) are not the perfect research solution they appeared to be at their conception.
It’s easy to see the appeal of till receipt surveys (TRS). Invited to share their comments online, customers are incentivised to take part by a prize draw. Data is returned at individual store level and can pinpoint the time of day and even the member of staff involved in the transaction, thereby placing a financial value on the level of satisfaction, or dissatisfaction, experienced by customers. The prospect was compelling: actionable results and low running costs. No wonder so many major retailers embraced the concept with gusto.
The fact that the surveys are conducted online was part of the attraction. It means that respondents are internet-savvy, suggesting that they may be higher spenders who are willing to engage. However, I have harboured doubts about the effectiveness of TRS since trialling it for a client. And with more clients approaching us for hard and fast facts on the technique, there are pros and cons that must be explored.
Perhaps the most important, yet most surprising, factor is the extremely low response rate – less than 1%. Surprising because till staff often point out the survey and the prize draw, shoppers can be expected to keep hold of their receipt for a time, and the surveys are promoted as quick and easy to complete. So what went wrong?
An inherent problem with TRS is that only purchasers are invited to comment, missing the swathes of potential customers who decide not to buy but whose views may hold just as valuable an insight. Arguably, non-purchasers provide greater insight, because they will highlight important issues such as barriers to purchase and customer service failures.
Participants are therefore not a representative sample of the retailer’s shopper base, and responses tend to be extreme in terms of level of satisfaction. This has led retailers to share their concerns that they are not gaining a full understanding of the true customer experience after all.
Such a small, skewed sample is a shaky basis for a retailer’s action plans. And while a prize draw may be a useful tool to incentivise response, GfK’s findings have shown that participation rates are higher among middle-aged and older people who complete multiple surveys.
Ultimately, however, the retailers themselves will draw their own conclusions from the surveys, and judge whether participants are higher-value customers voicing truly actionable comments.
Another problem is that since their inception, TRS have been increasingly infiltrated by marketing messages and data collection. Whether to advertise the launch of new products or to generate customer data for future campaigns, the content of the surveys has rarely remained true to its original purpose. This not only muddies the waters for retailers’ research teams, but may put off customers who would otherwise have responded.
It’s not all bad news
Whilst we doubt the efficacy of TRS as a legitimate long-term research method, we do not deny its potential as a first-response mechanism, and it has been useful for retailers as a quick fix for suspected issues such as stock availability and staff behaviour. Thus TRS, used as a tactical tool, has credence and a place among the routes to customer feedback.
There are many alternatives, from the tried and tested, but relatively expensive, interviewer-administered survey to visually appealing in-store technological approaches, via pods or tablets. Although the latter have a high set-up cost, they are less expensive to administer than interviewer-led research. Both of these techniques are more immediate and may encourage all comers to participate, whatever their inclination to purchase or their level of satisfaction.
Email and SMS surveys offer a lower-cost and potentially more effective approach, but both rely on the validity of the initial contact data. The nature of SMS messaging places a limit on the number of questions that can be asked, and email surveys may elicit fears of fraud, as well as offering little control over who responds, and when.
Finally, phone and postal surveys are similarly low cost, but are largely discounted by agencies and retailers alike, as they tend to be slow and to achieve low response rates.
The method undoubtedly has fundamental flaws in terms of the representativeness of its data, and can easily be abused as a marketing tool or to gather a raft of customer feedback and data of questionable value. That said, its low cost makes it an attractive proposition, and it is certainly useful for providing rapid feedback that can be used to monitor and adjust customer service practices. In conclusion, although insufficient as a stand-alone technique, used correctly TRS can have a place in a portfolio of customer feedback tools.