FEATURE
16 April 2015

Everything counts


We recently completed a study on technology and the screens we interact with daily. Not unusual in itself, but with the PixelLife study, we aimed to look at everything – an inordinately ambitious undertaking.


This got us thinking more generally about the scope and ambition of quantitative studies. If knowledge is power, then knowing everything is, well, very powerful. But no quantitative study can really find out about everything – it’s plainly impossible. With respondent engagement time limited, surveys understandably focus on the key issues of the day.

By definition, a narrow focus means the wider picture and its context are out of sight. Wouldn’t it be wonderful if we could find out everything about the market, take the blinkers off, and assess how the narrow market of interest fits into a wider whole?

But when responding to a brief with a finite budget, it would be a brave agency that suggested exploring areas beyond the core segments. Longitudinal studies, triangulation of datasets, and long-running communities offer a crafty patchwork solution to extending insights beyond a single study’s narrow focus, but I’m talking here of the ambition and scope of standalone quantitative studies. I’d like to share the experience of attempting to undertake a study that strived to cover the elusive “everything” context.

Our ambition was to undertake a technology study to understand the usage and profile of users across all possible screens we look at today – from smartphones to tablets, smart TVs to desktops, laptops to hybrids. We wanted to look at every activity on every screen, but also to assess screen-specific satisfaction levels – by activity, by device, by brand/service and by category.

So, to give just one of thousands of examples, we wanted to know about satisfaction with streaming TV shows in general, on a tablet, and specifically with Amazon Prime or Netflix on a tablet, and all combinations thereof. In short, we wanted a mix of broad usage insights coupled with the ability to drill down to ultra-specific instances of satisfaction.
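To give a feel for how quickly those combinations multiply, here is a minimal sketch; the activity, device and service lists are illustrative stand-ins, not the study’s actual code frame:

```python
from itertools import product

# Illustrative code frame only - not the actual PixelLife lists
activities = ["streaming TV", "gaming", "shopping", "banking"]
devices = ["smartphone", "tablet", "laptop", "smart TV"]
services = {
    "streaming TV": ["Netflix", "Amazon Prime"],
    "shopping": ["Amazon", "eBay"],
}

# One cell per activity-on-device, plus one per named service on
# that device where specific brands/services were measured
cells = [(a, d, "any service") for a, d in product(activities, devices)]
cells += [(a, d, s) for a, d in product(activities, devices)
          for s in services.get(a, [])]

print(len(cells))  # 32 cells even for this tiny toy frame
```

Each cell is a potential satisfaction score, which is why even modest lists of activities, devices and services quickly yield thousands of measurable combinations.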

We covered everything, including browsing, streaming, work, gaming, dating, shopping, sports, art, design, news, messaging, social media, gambling, security, banking, eBooks, maps, fitness… the list goes on. We might not have encompassed absolutely everything, but it’s a broad set of behaviours that covers a large proportion of our collective screen-time, and that’s probably as close to everything as we need.

With clients across all sectors, this study allowed us to cross-reference users and their profiles against many other activities – blinkers completely removed. The benchmarking potential of this kind of study is as wide as you could possibly want – and benchmarking is a huge part of understanding success.

There are, however, some real challenges in attempting a study as broad and ambitious as this:

Scope

A broad understanding and appreciation of the ‘everything’ is needed – there are no second chances with standalone studies, so deciding what to include and exclude is key.

Length

There’s clearly a trade-off between survey length and including everything. We limited ourselves to a 25-minute maximum survey length, but there is a downside to setting such a limit – I’ll expand on this later.

Technical issues

A broad study will often involve a whole set of technical issues. In the case of PixelLife, we had to adjust the script to suit the device, to ensure all the requisite inclusions and exclusions were in place, making everything relevant and logical by device, activity and service. This was time-consuming and cumbersome – but absolutely necessary.
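As a rough sketch of the kind of inclusion/exclusion logic this involves – every name and rule below is hypothetical, not the actual survey script:

```python
# Hypothetical relevance rules: which activities make sense on which
# device, so respondents never see illogical combinations
RELEVANT = {
    "smart TV":   {"streaming TV", "gaming"},
    "smartphone": {"streaming TV", "gaming", "banking", "maps", "messaging"},
    "desktop":    {"streaming TV", "gaming", "banking", "work", "shopping"},
}

def satisfaction_questions(device, activities_done):
    """Return only the satisfaction questions that are relevant and
    logical for this respondent's device and reported activities."""
    relevant = RELEVANT.get(device, set()) & set(activities_done)
    return [f"How satisfied are you with {a} on your {device}?"
            for a in sorted(relevant)]

print(satisfaction_questions("smart TV", ["streaming TV", "banking"]))
# -> ['How satisfied are you with streaming TV on your smart TV?']
```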

Sample size

In covering all activities across all screens and devices, and all the major services used, a large base size was essential to ensure coverage of the more niche brands used.
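The arithmetic behind that is simple but unforgiving. As a back-of-envelope sketch – the penetration figure and cell-size target here are assumptions for illustration, not findings from the study:

```python
# Assumed numbers for illustration only
target_cell_size = 100    # minimum users of a niche service for a
                          # readable satisfaction score
penetration = 0.02        # assumed share of respondents who use it

required_sample = target_cell_size / penetration
print(required_sample)    # 5000.0 respondents needed overall
```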

Perhaps the complexity of technology made our study more challenging than an equally broad study into leisure, entertainment or media might have been. We felt downright proud that we managed to include all activities, usage, profiling, services and satisfaction scores in a single study. But that’s all we could cover. There wasn’t space to include a single “why?” question in the survey. It was a question of breadth versus depth.

So we know exactly what everyone does with their screen-time, how it cross-references with other activities, and how happy people are when using specific services on specific devices – but we have no information on why they do it, or why they feel the way they do about it.

This is where follow-up, narrower studies certainly have a role to play. Coupling the narrow focus – the mainstay of most research studies – with the wider context of a more ambitious, all-inclusive study is the best possible combination.

If only all clients had the budget to do both!

Steve Evans is research director at Harris Interactive UK.
