Put your trust in an automated car

Feature, 4 December 2019


Driverless cars require humans to hand over control and trust their vehicles. But this trust depends on the timing of information delivered by the cars, as Jane Bainbridge reports

If autonomous vehicles (AVs) are going to be the future of driving, as many predict, then we’re going to have to learn to trust them. Driverless cars – which sense the surrounding environment with no, or minimal, driver involvement – are hailed in some quarters as offering more fuel-efficient driving, fewer accidents and a key role in more holistic transport strategies.

Handing over control to a vehicle, however, is a significant shift in people’s behaviour, and requires drivers to believe their cars will make safe, correct choices.

In light of this, researchers at the University of Michigan have studied how a vehicle’s voice prompts affect people’s trust in an AV. Lionel Robert, associate professor at the university’s School of Information, says he was drawn to this research because – having previously studied trust in humans – he realised it was based on expectations.

“We expect people to behave in a certain way, and when they don’t live up to those expectations, we either lose trust in them, or we have to explain why they didn’t do what we expected them to do,” he says.

“With autonomous vehicles, driving is dynamic, so the car has to make decisions independent of the human, and people may think ‘what just happened there?’. If you’re in an AV and it comes to a crosswalk, and there’s nobody around and it stops for no reason, it waits five seconds and then drives away – at that point, you don’t know what just happened. But if you’re told it’s programmed to stop anytime anyone is near the crosswalk or sidewalk, then you get it.”
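
That explain-before-act pattern is simple to picture in code. The sketch below is purely illustrative – the names and the rule wording are hypothetical, not taken from the study’s software – but it captures the idea of a car voicing its rule before it brakes:

```python
# Minimal sketch of an explain-before-act rule; all names here are
# hypothetical illustrations, not code from the Michigan study.

CROSSWALK_RULE = "I am programmed to stop whenever anyone is near a crosswalk."

def approach_crosswalk(pedestrian_nearby: bool, announce=print) -> str:
    """Decide whether to stop, voicing the rule *before* braking."""
    if pedestrian_nearby:
        announce(CROSSWALK_RULE)   # the explanation precedes the action
        return "stop"
    return "proceed"

approach_crosswalk(pedestrian_nearby=True)  # rider hears the rule, then feels the stop
```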

So, how do different details of voice explanation affect trust? Uncertainty reduction theory (URT) asserts that people seek to reduce uncertainty through information. As uncertainty in someone increases, trust in that person decreases. The researchers used URT to establish three hypotheses:

    • AVs that give explanations will produce higher driver trust and preference, as well as lower driver anxiety and mental workload, than AVs that give no explanations
    • Explanations given before the AV acts will improve trust and lower anxiety and mental workload more than explanations given afterwards
    • AVs with less autonomy – where drivers can approve or reject the vehicle’s actions – will produce higher trust, preference and mental workload, but lower driver anxiety, than AVs that act without asking permission.

These were tested in a controlled lab setting, using a high-fidelity driving simulator with 32 people (11 of them women), under four conditions: no explanation; an explanation given before the AV acted; an explanation given after the AV acted; and the option for the driver to approve or reject the AV’s action after hearing the explanation.
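
Expressed in code, the manipulation amounts to changing when – or whether – the voice prompt fires relative to the driving action. The encoding below is an illustrative sketch; the study’s own labels and implementation may well differ:

```python
from enum import Enum

# Illustrative encoding of the four experimental conditions described
# above; not the study's actual materials.
class Condition(Enum):
    NO_EXPLANATION = 1
    EXPLAIN_BEFORE = 2
    EXPLAIN_AFTER = 3
    ASK_PERMISSION = 4

def run_event(condition, act, explain, driver_approves=lambda: True):
    """Order the voice prompt and the driving action for one event."""
    if condition is Condition.NO_EXPLANATION:
        act()
    elif condition is Condition.EXPLAIN_BEFORE:
        explain()   # explanation first...
        act()       # ...then the manoeuvre
    elif condition is Condition.EXPLAIN_AFTER:
        act()
        explain()
    else:  # ASK_PERMISSION: the AV acts only if the driver approves
        explain()
        if driver_approves():
            act()

# Example: the condition the study found people trusted most.
run_event(Condition.EXPLAIN_BEFORE,
          act=lambda: print("car stops"),
          explain=lambda: print("stopping: pedestrian near the crosswalk"))
```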

The researchers looked at four outcome variables: trust; preference for the AV; anxiety; and mental workload. They found that when an explanation was given before the vehicle acted, people trusted it more and showed a preference for the AV – although there was no difference in anxiety or mental workload.

Robert says they wanted to show that giving explanations had an impact, but that it was better when they were given before the action – and better still, if people were given a choice. But he says the team were in for a couple of surprises with the findings.

“It turns out that, when you give them information afterwards, it’s not better than giving them no information at all. We also thought that giving them a choice would be better than telling them beforehand – it wasn’t.”

Robert thinks car manufacturers can learn from the research. “The thing with instructions beforehand is that it requires a level of computational power to predict, and a car can’t always do that. So there’ll be times when a car must make a decision and tell you later. When that happens enough times, there is less trust,” he says.

The risk is that people think there’s a problem with the car. “We know from literature that we tend to assume the worst when we don’t know something. So to what degree can you fill that void with correct information?”

Trust, anxiety and mental workload were all measured through self-report surveys. Trust and anxiety were rated on a scale of one to seven, while mental workload was assessed with the NASA Task Load Index (NASA-TLX).
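
For reference, the standard NASA-TLX takes six subscale ratings (mental demand, physical demand, temporal demand, performance, effort and frustration, each 0–100) and either averages them (‘raw’ TLX) or weights them using 15 pairwise comparisons; the article does not say which variant the study used. A scoring sketch, with made-up example ratings:

```python
# Standard NASA-TLX scoring, for reference; the ratings below are
# invented for illustration, not data from the study.
SUBSCALES = ("mental demand", "physical demand", "temporal demand",
             "performance", "effort", "frustration")

def nasa_tlx(ratings, weights=None):
    """Overall workload from six 0-100 subscale ratings.

    Weighted form: weights come from 15 pairwise comparisons and sum
    to 15. Raw form (RTLX): the simple mean of the six ratings.
    """
    if weights is None:                         # raw TLX
        return sum(ratings[s] for s in SUBSCALES) / len(SUBSCALES)
    assert sum(weights.values()) == 15          # one vote per comparison
    return sum(ratings[s] * weights[s] for s in SUBSCALES) / 15

ratings = {"mental demand": 70, "physical demand": 20, "temporal demand": 55,
           "performance": 40, "effort": 60, "frustration": 35}
print(round(nasa_tlx(ratings), 2))  # raw TLX: 46.67
```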

“We are trying to find other objective measures of trust. It’s hard, because people trust from how they feel,” says Robert.

But how accurate is the simulator compared with being on the road? Are there any factors – in terms of the difference between a simulator and real-world experience – that need to be considered?

“I would say no, because we had very high-fidelity simulation. It might be different if someone’s life was on the line – so, if we altered the degree of safety. In this case, we didn’t make the car driver unsafe, so they were never threatened. We just wanted to see what set-up people prefer,” says Robert.

In terms of autonomy, the participants were given level 4 AVs – that is, vehicles considered driverless under certain conditions. Participants could take over control if they wanted to, but they didn’t have to, says Robert – and, in this study, no-one ever did.

“When you introduce the idea of takeover, and you want to test that, it crowds other things,” he adds.

“Some people will say, if there’s a human driver, then the vehicle isn’t really autonomous; a lot of people will say a Tesla is more of an advanced driving assistance system than an autonomous system.

“When you go back to level 2 [has at least two automated functions/occasional self-driving] or 3 [can handle dynamic driving tasks, but might need intervention/limited self-driving] a lot of things become different.”

What level of autonomy the future of driving will involve remains to be seen, but Robert is already extending his research. The next stop is comparing modes of explanation, with current tests pitting voice prompts against text.
