“Can it handle irony or sarcasm?”
This is normally the first question thrown at me when I talk to clients about text mining solutions. There seems to be a major concern that machine-based sentiment analysis won’t be able to decode the deliberate use of language to say the opposite of what is meant. And indeed, any automated sentiment analysis will always struggle with irony and sarcasm.
But to understand the size of the problem, we took several thousand open-ended responses to customer satisfaction surveys and investigated how many of them used irony or sarcasm. We found that the average incidence was 1%. Assuming that irony and sarcasm are more frequent in customer feedback than in other research-relevant text sources such as blogs, forums or inbound customer email, this 1% represents a conservative, upper-bound estimate of the general error rate that these rhetorical devices introduce into sentiment analysis.
Positive or negative? It’s often not that straightforward
A 1% error rate is negligible from a research perspective, as we are used to dealing with sampling and related errors that almost always exceed this mark. A greater source of uncertainty is the simple fact that we usually do not know the intention of the author of the text we analyse. One example verbatim from a car dealership customer: “They will not carry out any work without asking me first.”
What did the customer want to express with this statement? It can be read as positive feedback, if we assume the customer wanted to express their delight with being in control of what is done to their car, and therefore in control of the cost. But it could also be that the customer was annoyed by (possibly repeated) calls from the workshop and would rather the necessary work were simply done as quickly as possible. The point is: without additional information, we cannot be 100% sure whether this is a positive or a negative statement. If we gave this statement to 100 coders, we would certainly not get a fully consistent answer, and I am sure that the error rate due to inter-coder reliability issues would be higher than 1% in such cases.
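To make the inter-coder reliability point concrete, here is a minimal sketch, using entirely hypothetical labels rather than our survey data, of how agreement between two coders could be quantified with Cohen’s kappa via scikit-learn:

```python
# Minimal sketch: quantifying inter-coder agreement with Cohen's kappa.
# The labels below are hypothetical and purely for illustration.
from sklearn.metrics import cohen_kappa_score

# Sentiment labels assigned by two coders to the same ten verbatims
# (1 = positive, 0 = neutral, -1 = negative).
coder_a = [1, 1, -1, 0, 1, -1, -1, 0, 1, -1]
coder_b = [1, 0, -1, 1, 1, -1, 0, 0, 1, -1]  # disagrees on the ambiguous cases

kappa = cohen_kappa_score(coder_a, coder_b)
print(f"Cohen's kappa: {kappa:.2f}")  # anything below 1.0 signals imperfect agreement
```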
Sentiment models need to be domain specific
Interestingly, we have seen far fewer of these ambiguous cases in some other domains, for instance hotel reviews. Here, the expressed sentiment is often much easier to decode. Hotel reviews naturally contain more sentiment-carrying words (in particular adjectives) because a hotel stay is usually a more emotional and personal experience than having a car serviced. The language can also differ significantly across domains. The adjective “long” can indicate a positive aspect in the context of smartphone reviews (“long battery life”), but a negative one in the context of retail customer feedback (“long queues at the checkout”). Hence, context has to be considered, which means we cannot trust a one-size-fits-all sentiment model. Models have to be trained or fine-tuned for the domain in question, as the sketch below illustrates.
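As a rough illustration of why domain matters, the following sketch trains two tiny bag-of-words classifiers on made-up smartphone and retail examples. In practice each model would be trained or fine-tuned on thousands of labelled verbatims from its own domain, but even this toy version shows the word “long” picking up opposite associations:

```python
# Minimal sketch of domain-specific sentiment models, using made-up training data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical smartphone reviews: "long" tends to appear in positive contexts.
phone_texts = ["long battery life", "battery lasts long", "screen cracked quickly",
               "long standby time", "charger broke", "camera is poor"]
phone_labels = [1, 1, 0, 1, 0, 0]  # 1 = positive, 0 = negative

# Hypothetical retail feedback: "long" tends to appear in negative contexts.
retail_texts = ["long queues at the checkout", "waited a long time", "friendly staff",
                "long wait to be served", "helpful assistant", "clean store"]
retail_labels = [0, 0, 1, 0, 1, 1]

phone_model = make_pipeline(CountVectorizer(), LogisticRegression()).fit(phone_texts, phone_labels)
retail_model = make_pipeline(CountVectorizer(), LogisticRegression()).fit(retail_texts, retail_labels)

# The same phrase is typically scored in opposite directions by the two models,
# because "long" was learnt as a positive cue in one domain and a negative cue in the other.
test = ["long wait"]
print("phone model:", phone_model.predict(test))    # typically [1], i.e. positive
print("retail model:", retail_model.predict(test))  # typically [0], i.e. negative
```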
Sentiment analysis needs to be supported by clever NLP
There are many more aspects to consider when doing sentiment analysis, not least the ongoing debate between supervised machine learning and unsupervised, lexicon-based models. But there is one more important thing worth considering. Sentiment analysis on its own provides a very one-dimensional view of reality. Everything is painted in black and white, either good or bad, and this usually neglects the context. What is being talked about, which words are being used and who is talking: all of this matters if we are to put sentiment into context. This means that sentiment analysis needs to be complemented by smart Natural Language Processing (NLP) techniques such as topic modelling to reveal the key themes in the data. Integrating topics and sentiment with the available data about the authors of the text, such as CRM data, enables us to drill down and identify important issues and opportunities. Only then will sentiment analysis deliver meaningful and actionable insights.
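As a final sketch, here is one way topics, sentiment and customer data could be brought together. The verbatims, sentiment scores and CRM table are all hypothetical, and with only a handful of documents the topics themselves are not meaningful; the point is the workflow of deriving topics, attaching sentiment and joining on a customer ID:

```python
# Minimal sketch of combining topics, sentiment and CRM data.
# All column names, sentiment scores and the CRM table are hypothetical.
import pandas as pd
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

feedback = pd.DataFrame({
    "customer_id": [101, 102, 103, 104],
    "text": ["long queues at the checkout", "friendly and helpful staff",
             "checkout took forever", "staff went out of their way to help"],
    "sentiment": [-0.6, 0.8, -0.7, 0.9],  # pretend these come from a sentiment model
})

# Derive simple topics from the verbatims with LDA (toy-sized here).
doc_term = CountVectorizer(stop_words="english").fit_transform(feedback["text"])
lda = LatentDirichletAllocation(n_components=2, random_state=0)
feedback["topic"] = lda.fit_transform(doc_term).argmax(axis=1)

# Hypothetical CRM attributes about the same customers.
crm = pd.DataFrame({"customer_id": [101, 102, 103, 104],
                    "segment": ["new", "loyal", "new", "loyal"]})

# Join topics and sentiment with customer data, then drill down by segment and theme.
merged = feedback.merge(crm, on="customer_id")
print(merged.groupby(["segment", "topic"])["sentiment"].mean())
```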
Frank Hedler is director of Advanced Analytics at Simpson Carpenter