
OPINION
26 September 2019

Lie to me


Nick Bonney and Katrien Gunn discuss the dangers of taking claimed behaviour at face value.

The usage and attitude survey has long been a staple of the market research industry, helping brands understand category dynamics and the associated opportunities.

While this kind of research is in decline as brands make more use of their own ‘first party’ data, the principle of asking consumers to record what they’ve done remains central to most pieces of research. From the brands they’ve bought to the shops they bought them in; the adverts they’ve watched and the channels they watched them on – wherever you look, claimed behaviour is still front and centre in a large chunk of most research projects.

A lot of the focus on data quality has been on removing those who abuse the system – the serial respondents, the survey speeders and the deliberate fraudsters. But what if we simply aren’t that good at remembering stuff? What if the record we store in our heads – our reality – is significantly different from what actually took place?

To put this to the test, we used the Incling platform to have 18 consumers talk to us about their smartphone use over a two-week period; at the end of each week we also captured their actual behavioural data from Apple’s Screen Time feature to examine the differences.

To close the loop, we then exposed people to the data they’d shared and asked them to comment on what surprised them vs. their own record of events. This approach revealed key challenges with claimed behavioural data.
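For readers curious what that comparison might look like in practice, here is a minimal sketch of lining up claimed daily minutes against logged Screen Time minutes per participant. The participant IDs and figures are invented for illustration; this is not the study’s actual analysis.

```python
# Illustrative sketch only: comparing claimed vs. logged daily screen time.
# Participant IDs and minute counts below are hypothetical, not study data.

claimed_minutes = {"P01": 60, "P02": 120, "P03": 45}   # self-reported daily average
logged_minutes  = {"P01": 142, "P02": 198, "P03": 32}  # e.g. taken from Screen Time readings

for pid in claimed_minutes:
    claimed = claimed_minutes[pid]
    logged = logged_minutes[pid]
    gap = logged - claimed
    pct = 100 * gap / claimed
    print(f"{pid}: claimed {claimed} min, logged {logged} min "
          f"({gap:+d} min, {pct:+.0f}% vs. claim)")
```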

With a range of user types in our sample, time spent each day on a smartphone ranged from just 32 minutes to nearly six hours at the other end of the spectrum.

However, what all our users had in common was that the time they spent on their phone often (far) exceeded what they expected.


When confronted with their data, most users looked for external factors to explain the increased use (e.g. exceptional train journeys or specific life events) rather than accepting that their original estimate may have been incorrect.

The concept of the availability heuristic – our tendency to over-weight events that come to mind most readily, often those with a stronger emotional charge – has long been discussed, and it was interesting to see it come to life in the context of everyday behaviour patterns.

Before being faced with their own data, users tended to overstate ‘useful’ use cases (e.g. work emails or shopping), yet for all but one of our respondents, social networking was where they spent most of their time on their phone. It wasn’t just the amount of time that consumers misjudged, but what they spent that time doing.

So, what can we do to address these challenges? Undoubtedly, many best-practice principles of questionnaire design (e.g. asking participants about a specific event or time period) are as important now as they have ever been. However, today’s data-rich landscape opens up even richer possibilities to reduce the reliance on simply asking consumers what they’ve done. For example, matching respondents against transactional data, or using passive metering for a small sub-sample, allows us to calibrate claimed vs. actual behaviour.
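To show the calibration idea at its simplest, here is a rough sketch that assumes a passively metered sub-sample and a plain ratio adjustment. Both the figures and the adjustment method are assumptions for illustration, not a technique prescribed in this article.

```python
# Rough illustration of calibrating claimed behaviour against a passively
# metered sub-sample. All numbers and the ratio-adjustment approach are
# assumptions for the sketch.

# Claimed weekly usage (hours) for the full survey sample
claimed_all = [3.5, 5.0, 2.0, 7.5, 4.0, 6.0, 1.5, 8.0]

# Sub-sample where both a claim and a passive meter reading are available
subsample = [(3.0, 5.1), (6.0, 9.8), (2.5, 4.2)]  # (claimed, metered)

# Average ratio of metered to claimed usage in the sub-sample
calibration = sum(metered / claimed for claimed, metered in subsample) / len(subsample)

# Apply the factor to the wider sample's claims
adjusted = [round(c * calibration, 1) for c in claimed_all]

print(f"Calibration factor: {calibration:.2f}")
print(f"Adjusted estimates: {adjusted}")
```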

But the risk is that, even then, we can end up with a one-dimensional read on what drives behaviour.

With community platforms now offering a diverse range of tools, we can capture a more longitudinal view of behaviour and, crucially, also explore the motivations behind it.

While seeing the difference between claimed and actual behaviour was interesting, it was only by applying a wide range of techniques that we were truly able to understand why it existed. A combination of straightforward and more projective techniques (e.g. asking participants to draw the relationship they have with their smartphone), delivered in both private one-on-one and public discussion-forum formats, allowed us to unpick the differences between what people said and what they actually did.

These tools also allow us to get closer to the interaction we’re trying to understand. Asking consumers to reflect on what they’ve just done, and using more observational tools such as diaries, video blogs or screen-casts, not only lets us see the world through the consumer’s eyes but also helps reduce potential response bias by capturing behavioural information ‘in the moment’.

With decision-making seemingly getting ever faster and the number of choices rising exponentially, the risk is that the gap between claimed and actual behaviour will only continue to widen.

However, there’s no excuse to blindly accept these differences when we finally have the tools at our disposal to genuinely explore and understand them.

Nick Bonney is founder of Deep Blue Thinking and Katrien Gunn is a director of Incling

1 Comment

5 years ago

This shouldn't be new news. Mixing claimed and observed behaviour by linking survey and database data has been around for quite a number of years. The difference can be illuminating - and not just negatively - for instance, if someone claims to have purchased the premium brand when the reality is a cheaper brand, what would your interpretation be? Recently, we've been involved with experiments looking at how e-commerce behaviour - as in what was searched for and the route to purchase - matches up with customers' recollection of their purchase journey and motivations. Blending real and claimed data in this way turns out to be more enlightening than just using a single source.
