FEATURE | 1 September 2020

Tackling toxicity: the impact of algorithms at work

Artificial intelligence could become a tool in the arsenal of businesses looking to prevent employee misconduct and understand more about workplace culture – but there is a fine balance to be struck. By Katie McQuater

Since the #MeToo movement prompted a global conversation about sexual harassment and uncovered multiple stories of workplace misconduct, the issue of how we behave at work has come under more scrutiny than ever.

At the same time, algorithms have become an inescapable part of modern life, and now – as well as serving us recommendations on what to watch, listen to, or eat – they are making an impact on the workplace and how people are managed.

With businesses experimenting with new applications of artificial intelligence (AI) to recruit staff and monitor performance, perhaps it was inevitable that the next frontier would be managing people and their behaviour. Advances in natural language processing have given rise to technologies designed to monitor written exchanges between employees, giving firms insight into how staff interact and detecting signs of bullying or harassment.
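
To make that concrete, here is a minimal sketch, in Python with scikit-learn, of the kind of text-classification step such monitoring tools generally build on: a model trained on messages labelled by human reviewers scores new messages so that possible bullying or harassment can be routed to a person for review. Every message, label and threshold below is invented for illustration; real systems rely on far larger datasets and more sophisticated language models.

```python
# Minimal sketch: flagging potentially harassing messages for human review.
# All example messages, labels and the 0.5 threshold are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny invented training set: 1 = flagged by a human reviewer, 0 = benign.
messages = [
    "Great work on the deck, thanks for turning it around so quickly",
    "If you raise this with HR again you will regret it",
    "Can we move the stand-up to 10am tomorrow?",
    "Nobody wants you on this team, so keep your mouth shut",
]
labels = [0, 1, 0, 1]

# Bag-of-words features feeding a simple linear classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(messages, labels)

# Score an unseen message; anything above the threshold is routed to a human
# reviewer rather than acted on automatically.
new_message = ["Keep quiet about the complaint or things will get difficult for you"]
score = model.predict_proba(new_message)[0][1]
print(f"harassment score = {score:.2f}, flag for review: {score > 0.5}")
```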

While the use of AI to pick up harassment in emails appears to be nascent – according to the Guardian, such technology is being used by law firms in London – it could become more commonplace as organisations try to stay one step ahead of misconduct and more staff work remotely. However, applying technology to such a sensitive human problem raises questions over biases, culture, accuracy and trust.

Transparency

Just getting the data to make AI technologies robust in the first place could be difficult, according to Prasanna Tambe, an associate professor at the University of Pennsylvania’s Wharton School, and co-author of a paper on applying AI within human resources management.

“For these systems to work well, you need to have a lot of instances in your data. If you’re talking about one organisation, when you get down to harassment or toxic employees, you don’t have a lot of examples. You need to have a lot of cases – not just one or two, or even five or 10.”
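
A rough back-of-the-envelope illustration of that point, using invented numbers: when incidents are that rare in a single organisation's data, a model that never flags anything still looks highly "accurate" while catching nothing, which is why a handful of cases is not enough to train on.

```python
# Illustration of the data-scarcity point with hypothetical numbers: rare
# events make naive accuracy meaningless.
total_messages = 100_000   # hypothetical message volume for one organisation
incidents = 5              # hypothetical number of labelled harassment cases

# A "model" that predicts benign for every single message:
accuracy = (total_messages - incidents) / total_messages
recall = 0 / incidents     # it catches none of the real incidents

print(f"accuracy = {accuracy:.4%}, recall on actual incidents = {recall:.0%}")
# accuracy = 99.9950%, recall on actual incidents = 0%
```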

Companies using such tools would also need to be transparent about how the tools are used and what decisions are based on. A range of AI technologies make recommendations based on a number of factors, and while some are transparent in how they reach decisions, others are “considerably more opaque”, says Tambe.

“If you’re trying to use an opaque system to predict something that’s going to affect an employee’s career trajectory, there’s a lot more to contend with. Those limitations become much more significant than they have in the past.”

The transparency question has implications for the professionals overseeing the systems – the tech does not in itself solve the problem; it merely flags it. As such, it’s more useful to think of such systems as tools for HR managers rather than a substitute for human decision-making. Tambe adds: “We’re not at the stage where it’s OK for employees to get direction from an algorithmic bot, so to speak. So if an AI system identifies something as being an issue, what is the oversight on that going to be; where is the human element?”

Cultural nuance

Human decision-making is also necessary to train the technology in the first instance, but defining what constitutes harassment has potentially negative cultural ramifications for companies, both in how the system is designed and in how people react to it.

Heledd Straker, a workforce futurist at PA Consulting, says: “Bullying and harassment is entirely cultural and all to do with language, and language changes all the time because it reflects the perceived realities of certain groups. Also, people behave differently if they think they’re being watched, so you could end up with [harassers] becoming less ‘visible’.”

Lack of diversity among developers means that everyday cultural nuances can be lost when a system that is optimised to achieve one goal is put into practice in the real world – for example, in 2018, a machine-learning hiring tool being developed by Amazon was found to discriminate against women. Straker uses the example of disabled people.

“Because of how high unemployment is for people with disabilities, there is no training data for AI, so it doesn’t pick them up and, therefore, AI reinforces the view that disabled people are less able to work,” she says.

“Every social group has its own discourse; words and phrases that are shortcuts. It is quite hazardous if you accidentally plug in the phrases for your own people into a technology that’s meant for other people. Technologists’ views don’t necessarily reflect wider society. It’s often the white male technologists’ view – which isn’t necessarily bad, but it’s just one perspective.”

AI could stifle natural communication and collaboration between employees, says Vijay Mistry, head of employee research at Harris Interactive. “You, as an individual, have established the boundaries with your colleagues about what those lines are, and for us it’s acceptable, but for a machine-based technology it might not be acceptable.”

Mistry understands why HR might look to AI to optimise processes and streamline some of the more burdensome aspects of the role. However, he’s sceptical of companies’ ability to employ such systems effectively without significant hurdles.

“At the moment, they’re struggling with just how to use big data and advanced analytics,” he says. “Conversations around AI are focused on operations, rather than serious issues such as getting to the root, underlying cause of issues such as misconduct.

“Human resources, by its very name, is about the human element and approach. Every employee is different. I don’t believe AI is at a point where it’s capable of dealing with the volume and complexity of human behaviour.”

Trade-offs

From the perspective of employee insight, could analysing how staff communicate help organisations to be better prepared for the future? Mistry cites the financial crisis as one example of an event that might have been prevented had organisations had AI-driven insight into where people were lending when they should not have been. “There is a benefit, but it needs to be traded off with the risk to employee culture,” he says.

Determining the true impact in the context of people management requires ongoing measurement and evaluation, says Straker, who co-authored a paper with the Chartered Institute of Personnel and Development that found HR is the business function least likely to be involved in decision-making about investments in AI.

“Essentially, there was no voice of the employee – and given that the vast majority of AI in the workforce will impact people, that needs to be looked at. It’s currently on tasks, processes and technology, but it needs to have more of a focus on the people.”

This article was first published in the July 2020 issue of Impact.
