FEATURE | 15 January 2018

The human perspective



Human skills are more, not less, important as technology advances but businesses need to face up to some difficult questions about how technology is used, argues envisioner Dave Coplin. By Jane Simms.


Dave Coplin doesn’t take himself too seriously. A Star Trek fan since the age of eight, he is, nearly 40 years later, “a proper grown-up nerd” complete with ponytail, beard and hipster spectacles. 

Until very recently, he also had an appropriately nerdy moniker – chief envisioning officer at Microsoft. “I always told my mum I’d be a CEO one day,” he jokes.

But we should take him very seriously indeed. An evangelist for the world-changing potential of technology since his university days, and still “a naive optimist”, he nevertheless has a number of concerns. One, we have become enslaved by technology rather than liberated by it. Two, we have abdicated too much responsibility to technology, expecting it to provide answers without us asking the right questions, and allowing many of our innate cognitive skills to atrophy in the process. Three, workplaces have not kept pace with technological advances, severely compromising our productivity. And four, attempts to blame technology for a range of societal ills, from bullying, to job losses, to potential Armageddon, threaten to forfeit the enormous prize that is on the table – including a cure for cancer and Alzheimer’s.

Grateful to Microsoft for “letting me raise my freak flag high” for 12 years, he left in September and set up his own consultancy, The Envisioners, to help organisations – businesses, government and even families – to address these challenges. And time is of the essence.

“I think we are heading into two decades of massive disruption that will be driven by artificial intelligence (AI) and machine learning, on the scale of the Industrial Revolution, if not greater,” predicts Coplin. 

“In the Industrial Revolution we automated manual processes, but this time it will be cognitive processes, so middle-class knowledge workers will be most affected. We have to prepare the younger generations for that world, and equip older ones who are caught up in the change with the skills to continue to be successful.”

Critically, we need to hone our human skills, which become ever more important as technology marches on, argues Coplin. “We need to blend the best of digital with the best of analogue, and use technology to extend our reach,” he says. We need to use technology more intelligently, including wresting back control of technologies that, whether we realise it or not, control us. 


Email is a classic example. A technology designed to help humans communicate is making us communicate in inhuman ways, points out Coplin. The rot set in with the arrival of email on BlackBerrys in the late 1990s: “Workers became mentally absent from their homes and families.” And, in the office, clearing the inbox has become a proxy for real work – no wonder we have problems with productivity, he says. Facebook and other social media have had a similar effect on the wider population. “One of the core human skills we need to develop is to understand, at any point in time, if the technology can help us or not, and if it can, great, fill your boots, and if it can’t, turn it off,” he says.

Our obsession with “filling in the nooks and crannies of dead time” with email or social media gives us no time for deep thought and reflection, he continues. We’ve also lost ‘the skill’ of being bored. “Boredom is the gateway to creativity and innovation. I experiment on my son. If we’re going on a long car journey, I might stop him from getting his technology out. He’ll whine for half an hour, and then he starts to sing, and tell jokes, and we have these wonderful conversations.”

Coplin also nails the lie of multi-tasking, which isn’t a skill at all, he says. The cognitive shifts we must make as we switch between different tasks and devices take a toll on our time and our brains, making us less effective and productive than when we concentrate on just one thing. More broadly, our reliance on technology makes us lazy thinkers. Research shows, for example, that our dependence on GPS is reducing our innate spatial awareness. 

He knows this through study, and he knows it through his own experience. “I had an epiphany a few years ago when I realised how damaging all this technology had been to my mental state. So, I do practise what I preach. I’m like a reformed smoker, which is horrible for everyone around me.” 

Coplin is an advocate of ‘humans plus machines’, but the question then becomes how we use them to extend our reach – where we draw the line between what we do and what they do. Einstein apparently never memorised his own phone number on the grounds that he could easily look it up. On that basis, with a world of data at our fingertips, why would we bother learning anything? 

“The greatest mathematicians of our time will tell you that 80% of the maths kids get taught is irrelevant,” says Coplin. “The only time my son is ever going to need to solve a quadratic equation is in the lesson on quadratic equations. Yes, he needs the basics, but what he really needs to know is how to manipulate technology to use those principles. We don’t need our kids to become computer scientists, but whatever vocation they choose, their technological competence will fundamentally affect their life success. At the moment, computer science is confined to the lab, but it should be done across the curriculum.”
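Coplin’s quadratic-equation example illustrates the distinction he is drawing: once the principle (the quadratic formula) is understood, the mechanical work is exactly what software does well. A minimal Python sketch of that hand-off (an illustrative function, not anything Coplin proposes):

```python
import math

def solve_quadratic(a, b, c):
    """Return the real roots of a*x^2 + b*x + c = 0, smallest first."""
    disc = b * b - 4 * a * c
    if disc < 0:
        return []  # no real roots
    root = math.sqrt(disc)
    # A set collapses the double root when disc == 0
    return sorted({(-b - root) / (2 * a), (-b + root) / (2 * a)})

# x^2 - 5x + 6 = 0 factors as (x - 2)(x - 3)
print(solve_quadratic(1, -5, 6))  # [2.0, 3.0]
```

The pupil who understands what a root is, and why a negative discriminant means there are none, can delegate the arithmetic – which is Coplin’s point about principles versus procedure.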

Much of Coplin’s thinking is encapsulated in two slim volumes – Business Reimagined: Why work isn’t working and what you can do about it (2013) and The Rise of the Humans: How to outsmart the digital deluge (2014). The third will be on the broad theme of education and skills.

Part of the education task will be to counter the “relentless negativity” associated with technological advances. Technology is a scapegoat, says Coplin: “It’s shit people who cause bullying, not social media. We have to look at all the good that technology has to offer, and then work backwards to work out what to fix to achieve it.”

But he agrees that people do things online that they wouldn’t in the real world – or, at least, not to the same extent. “I think there is a bit of etiquette we need to learn as a society about how to behave online and how to respond to these things,” he says. 

Would legislation help? 

“That’s really hard to answer. I think we need more debate between the government, technology companies and society about the kind of behaviour we find acceptable, and the answer isn’t as clear cut as we’d like to think it is. In my search-engine days [at Bing] the government told us not to serve links to extremist videos. Fantastic, that makes sense. Draw a line for me and I can do that. But they wanted us to decide what was extreme. I thought, that’s not going to work: I’m a white middle-class boy from the East Midlands and my definition of what’s extreme is probably going to be different from someone else’s.” 


Nowhere is the need for moral and ethical governance more important than in the field of AI and machine learning – concepts that appear to both fascinate and terrify people. Why, given our love affair with the technology we carry around in our pockets, are people so afraid?

It’s the media and popular culture, says Coplin. “Whenever you get a piece on robots, I guarantee you there will be a picture of, or reference to, The Terminator. Likewise, with AI it will be 2001: A Space Odyssey. It always comes down to this adversarial battle between humans and machines – we seem to like it that way. But there are two things that people need to understand about AI – it’s neither artificial nor intelligent. It is just automation – the equivalent of me turning up at your weaving mill with a spinning jenny 250 years ago.”

Now, he explains, AI is ‘narrow’ – that is, the algorithms can only replicate within a given pattern that we teach them – but in a few decades, we will have ‘general’ AI, where the algorithms will start to learn and connect and prove themselves, and this is where the fear – that the machines will turn against us – arises. And although it’s a legitimate fear, “there are lots of very bright, important people working right now to try to prevent the Armageddon”.

The thing with AI is that you don’t programme an algorithm; you teach it like you do a child – “and if you teach it the wrong things, it learns the wrong things”. Eliminating bias is a current challenge. “If you do a Google image search on ‘footballer’ you’ll have to scroll a long way down to find a woman. Is that the image of society that we want?” 

Big data is another area fraught with moral and ethical challenges. The bigger the data – the more powerful the algorithms – the bigger the problems we can solve. In a business context, big data will allow companies to offer unprecedented value to their customers, but at the risk of invading their privacy. “To date, outside the tech companies, there hasn’t really been a debate about what companies should and shouldn’t do, because they’ve never had to think about these things before,” says Coplin. 

And it’s easy to unwittingly invade someone’s privacy, he warns. Researchers realised a few years ago that if you take two data sets of sufficient size, on a similar topic, although they are both anonymous, you can start to identify individuals within them. The concept of ‘differential privacy’ emerged to counter this – the statistical science of trying to learn as much as possible about a group while learning as little as possible about any individual in it. 
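The differential privacy Coplin refers to has a concrete mechanism behind it. A minimal sketch of the idea for a simple counting query, using the classic Laplace mechanism on hypothetical data (the function and dataset here are illustrative, not from any system Coplin describes):

```python
import random

def private_count(records, predicate, epsilon=1.0):
    """Answer a counting query with Laplace noise added.
    Sensitivity is 1: adding or removing one person changes the
    true count by at most 1, so noise of scale 1/epsilon is enough
    to mask any single individual's presence in the data."""
    true_count = sum(1 for r in records if predicate(r))
    # Laplace(0, 1/epsilon) noise, sampled as the difference
    # of two independent exponential draws
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise

# Hypothetical ages: the analyst learns roughly how many people
# are 30 or over, without learning whether any one person is.
ages = [20] * 60 + [50] * 40
print(private_count(ages, lambda a: a >= 30))  # about 40, plus noise
```

The group-level answer stays useful (the noise averages out over many queries or large counts), while any single record’s contribution is drowned in the noise – which is exactly the trade-off the definition formalises.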

Given these risks, legislation such as this year’s General Data Protection Regulation (GDPR) is an understandable mitigation strategy. But it closes doors that should be opening wider, argues Coplin. Increasingly, he points out, big data will be the biggest and most valuable source of information for organisations in all sectors, including health – but he argues that GDPR severely limits their ability to exploit that potential. 

There is an alternative, he suggests. “In today’s world, all the onus is on the consumer – you have to read the terms and conditions, you have to give permission for your data to be used, and so on. Why don’t we turn it round and instead put the onus on the organisations to behave responsibly with our data? We can kill them if they get it wrong, but let’s trust them to do innovative stuff with it, for our benefit. That’s challenging, because it’s not a risk-management perspective, and therefore represents a huge shift. But we have to move in that direction.”

But before we can do that, organisations themselves have some work to do to climb out of ‘creepy’ territory – we distrust the companies serving ads to us that we know are based on our data. The way for brands and advertisers to rebuild trust, says Coplin, is to be open and transparent with consumers about the data they have or want, and explicit about the value they will get in return. “Context will, increasingly, be king; as an advertiser I want to know who you are, where you are, what emotional state you’re in, what you did when you were here last time, what you’ve just done and what you’re going to do next. Armed with all that information, I could do some great stuff that would benefit everyone.” 

More generally, organisations need to switch from using technology to become more efficient, and instead use it to become more effective, claims Coplin. “Two hundred years of business history has taught us that good businesses focus on efficiency and processes, but you can be really efficient and deliver poor outcomes. What we need to focus on is doing the right things, and only then make sure that we’re doing them the right way.”

He finds it astonishing that most of us still follow a work pattern that was established when people congregated in mills or factories. Not only does technology allow us to break free of this ‘false schedule’, but it also allows us to ‘democratise’ data, flatten hierarchies and involve employees in problem solving – with commensurate rises in engagement. He points out most people have better technology at home than they do at work – and believes the reason organisations aren’t more innovative is because they are slaves to the notion of ‘productivity’ – “a cold economic equation of output per unit of input”.

He calls for a new definition of productivity for our digital age, and believes that if we measured ourselves on a broader range of criteria than just ‘paid work’ we might find the UK’s much lamented ‘productivity gap’ just disappears. But, here too, making the shift involves changing the culture towards one of empowerment and trust. 

If organisations don’t make this shift, the brave new world that artificial intelligence affords will be closed to them. Making it requires (real) intelligence, vision and bravery, and an understanding that AI, rather than replacing humans, frees them up to do more of what the technology can’t do – that is, be thoughtful, creative and innovative, and use judgement and empathy. 

One thing organisations need to think about is how they will run themselves in a world where big data allows them to accurately predict what will happen, rather than what they think might happen. The technology already exists. A few years ago, Microsoft used predictive big data to forecast the winners of 15 out of 16 World Cup games, X-Factor and the Scottish referendum (to within 2% of the public vote), among others.

Is Coplin a betting man? “No,” he laughs. “I should maybe start.” 
