
ARC Team Explores Drivers’ Trust in Automated Cars

January 7, 2020
[Image: A participant in an ARC study sits at the wheel of a driving simulator. At right, the participant touches a screen used in a search task.]

Imagine a soldier on patrol who can conduct full surveillance of the environment or send mission-critical communications to headquarters, all while sitting at the wheel of a vehicle that is safely driving itself at 70 mph.

In research that could one day make such car-and-driver teams a reality, engineers at the University of Michigan's Automotive Research Center (ARC) are tackling one of the primary obstacles to that scenario: drivers' general lack of trust in automated driving systems.

“We are working to better understand the factors that go into establishing a driver’s trust in the automation. Then we can optimize the performance of the combined driver-vehicle team,” said Dawn Tilbury, principal investigator for the work.

The idea is to determine what helps create enough trust that a driver is comfortable letting the car take over, freeing the human to complete key tasks unrelated to driving. At the same time, the driver shouldn't be too trusting, as automated systems will always have limitations, especially in the dynamic environment of a war zone.

“So we want to make sure that drivers have the right level of trust. Not blind trust, but appropriate trust to rely on the automation when it’s fully capable of driving,” said Tilbury, a University of Michigan (U-M) professor of mechanical engineering and of electrical engineering and computer science.

Ultimately, the work could help the Army reduce vehicle crews by replacing one of the two soldiers currently involved in operating a vehicle (one drives while the other conducts surveillance or performs other key tasks).

Better Awareness

After some two years of research involving a driving simulator and about 30 human subjects per experiment, the researchers have begun to report results. For example, they have found that a driver's trust increases when the vehicle gives them information that contributes to their situational awareness, or awareness of the surrounding environment.

“We discovered that when the vehicle gives the driver a heads-up about an upcoming situation and also suggests what to do, that goes a long way toward instilling a level of trust that allows the human to concentrate more fully on a secondary task,” said Lionel Robert, co-principal investigator for the work and a U-M associate professor of information in the School of Information.

The specific example tested in the study involved a vehicle stopped 50 feet ahead. Subjects developed the most trust in the automation when it not only alerted them to the stopped car but also suggested what they should do (if anything) about it. For example, the automation might tell the driver, “there’s a car stopped in your lane ahead, so you should take over the wheel [to maneuver around it].” That level of detail matters, the team found, compared with the automation giving no heads-up or only alerting the driver that a car is stopped ahead.
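To make the three conditions concrete, here is a minimal Python sketch of the alert levels the study compared. The class, function, and message strings are our own illustration under assumed names, not the researchers' code.

```python
from enum import Enum, auto

class AlertDetail(Enum):
    """Illustrative levels of automation transparency (names are ours)."""
    NONE = auto()               # no heads-up at all
    HAZARD_ONLY = auto()        # e.g., "there's a car stopped ahead"
    HAZARD_AND_ACTION = auto()  # hazard plus a suggested response

def compose_alert(detail, hazard, action):
    """Return the message the automation would present, or None."""
    if detail is AlertDetail.NONE:
        return None
    if detail is AlertDetail.HAZARD_ONLY:
        return hazard
    return f"{hazard} {action}"

# The condition the study found most trust-building: hazard + suggestion.
print(compose_alert(AlertDetail.HAZARD_AND_ACTION,
                    "There's a car stopped in your lane ahead.",
                    "You should take over the wheel."))
```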

The work was reported in March 2019 in the SAE International Journal of Connected and Autonomous Vehicles. In addition to Tilbury and Robert, co-authors include X. Jessie Yang, an assistant professor in the U-M Department of Industrial and Operations Engineering, and Luke Peterson, who earned a master's degree from U-M in 2018.

The researchers also presented the work at an ARC seminar in November 2019.

Another Dimension

The researchers gauged their subjects' level of trust with tools including surveys, heart-rate measurements, and eye-tracking. They also measured what percentage of the time a subject spent watching the road ahead versus doing a search task (the latter involved finding the letter Q in a display filled with distracting Os).
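The eyes-on-road percentage is the most mechanical of those measures; a short sketch shows how it could be computed from labeled gaze samples. The function name and region labels are hypothetical, assuming an eye tracker that tags each sample with the region being viewed.

```python
def percent_eyes_on_road(gaze_regions):
    """gaze_regions: one label per eye-tracker sample, e.g. 'road' or
    'task'. Returns the percentage of samples spent on the road."""
    samples = list(gaze_regions)
    if not samples:
        return 0.0
    return 100.0 * sum(region == "road" for region in samples) / len(samples)

# Three of four samples on the road -> 75.0
print(percent_eyes_on_road(["road", "task", "road", "road"]))
```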

And that brings another dimension to the work. “If we can estimate a driver’s trust from their behavior, we can potentially introduce some adaptive behaviors for the automation,” said Yang. For example, a driver’s behavior might indicate over-trust in the automation; the system could then alert the driver to bring that trust back into alignment.
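A hedged sketch of that adaptive idea follows: if a behavior-based trust estimate drifts above what current conditions justify, nudge the driver. The 0-1 scale, the margin, and the responses are illustrative assumptions, not the team's published method.

```python
OVER_TRUST_MARGIN = 0.2  # hypothetical tolerance on a 0-1 trust scale

def adapt_to_trust(estimated_trust, appropriate_trust):
    """Compare a behavior-based trust estimate with the trust level the
    automation's current capability actually warrants (both 0-1)."""
    if estimated_trust > appropriate_trust + OVER_TRUST_MARGIN:
        return "Alert driver: automation may be over-trusted; watch the road."
    if estimated_trust < appropriate_trust - OVER_TRUST_MARGIN:
        return "Reassure driver: automation is handling current conditions."
    return "Trust is calibrated; no intervention."

print(adapt_to_trust(0.9, 0.6))  # over-trust -> alert the driver
```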

“I think there needs to be a mix of automation,” said Tilbury. “When conditions are good, the car can take over tasks like maintaining speed and staying in the lane, but the driver should take over for unexpected situations.”

Work continues. For example, the researchers have been investigating how additional risks, such as foggy weather and curvy roads, affect a driver's trust.

Each experiment builds on its predecessors. Robert says he enjoys “not always knowing what’s next. As you go along in a study, the problems become more interesting and definitely more important.”


Stacy W. Kish