
Building Resilience and Safety in Autonomous, Networked Teams

February 11, 2020
A multi-agent system’s resilient communication protocol allows followers to receive and accept correct instructions despite one malicious agent in the group.

A series of robots are moving forward in formation through a dusty, alien landscape. Suddenly, one robot changes direction and moves away from the team. Then another breaks formation and another. The robot division has been compromised. The operation has failed.

This scenario could be the introductory paragraph of a science fiction novel, but an army of autonomous ground vehicles is becoming more fact than fiction. The technology is even appearing in the civilian sphere, as self-driving cars become increasingly common. Engineers are now exploring ways to thwart errors, both unintentional and malicious, that could compromise the safety and resilience of these networked systems.

“We do not focus on how the adversary will penetrate the system, but rather on the consequences of the adversarial actions,” begins Dimitra Panagou, assistant professor of Aerospace Engineering at the University of Michigan. “We know that the network is vulnerable, and try to understand how we can connect the agents with each other in [a way] so that the bad information and effect from the adversaries can be filtered out.”

Resilience is not a new concept in networked systems, but most work in this arena has focused on static networks, like power systems. Panagou’s work is unique because it deals with a system of autonomous vehicles in motion.

In particular, she is studying how misinformation in a compromised system is passed along through the network. The team is looking for ways to identify and filter out the misinformation to ensure that the majority of the robots in the network remain ‘good’ team members without violating the safety of the team.
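The article does not name the specific filtering rule the team uses. A standard approach from the resilient-consensus literature, and a reasonable stand-in here, is the Weighted Mean-Subsequence-Reduced (W-MSR) update: each agent discards the most extreme values it hears from its neighbors before averaging, so a bounded number of liars cannot drag it away from the group. Here is a minimal sketch in Python, assuming scalar states and at most F adversarial neighbors:

```python
def wmsr_step(x_i, neighbor_vals, F):
    """One W-MSR consensus update for a single agent (a sketch from
    the resilient-consensus literature, not the ARC team's exact
    algorithm).

    x_i           -- this agent's current scalar state
    neighbor_vals -- values most recently received from neighbors
    F             -- assumed upper bound on adversarial neighbors
    """
    above = sorted(v for v in neighbor_vals if v > x_i)
    below = sorted(v for v in neighbor_vals if v < x_i)
    # Discard the F largest values above x_i and the F smallest
    # below it (or all of them, if fewer than F exist on a side).
    kept = (below[F:]
            + [v for v in neighbor_vals if v == x_i]
            + above[:max(len(above) - F, 0)])
    # Average the surviving values with the agent's own state
    # (equal weights are one simple valid choice).
    return (x_i + sum(kept)) / (1 + len(kept))
```

On a sufficiently well-connected network, cooperative agents running a rule of this kind provably reach agreement despite up to F misbehaving neighbors, and they do so without ever identifying which neighbors are bad.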

“We want to make sure that the effect of the bad information, as long as it stays in the network, does not cause any safety issues, like collisions with each other, with the environment, or [deviation from the] route to compromise the mission,” Panagou said.
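The article does not detail how safety is enforced while bad information circulates. As a purely illustrative sketch (not the team’s method), a low-level safety filter can sit between each robot’s coordination command and its motors, throttling the command as another robot gets close; rigorous versions of this idea, such as control barrier functions, come with formal guarantees.

```python
import math

def safety_filter(cmd, own_pos, other_positions, d_min=1.0):
    """Toy minimum-separation filter (hypothetical, for illustration).

    cmd             -- commanded velocity (vx, vy) from the coordination layer
    own_pos         -- this robot's position (x, y)
    other_positions -- positions of the other robots in view
    d_min           -- required safety distance
    """
    nearest = min(math.dist(own_pos, p) for p in other_positions)
    # Scale from 1.0 (comfortably far) down to 0.0 at the safety
    # radius, so even a corrupted command cannot close the gap.
    scale = max(0.0, min(1.0, (nearest - d_min) / d_min))
    return (cmd[0] * scale, cmd[1] * scale)
```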

The team used computer simulations to model a system of autonomous, networked ground vehicles. They attacked the vehicles using different techniques and evaluated the response of different algorithms programmed into the robots. They then compared the model simulations to field experiments with real robots. In the field studies, seven ground robots programmed to move in a coordinated fashion were able to deflect the misinformation passing through the compromised network.

“Despite one robot not playing nicely, the rest of the robots were able to maintain formation even though they didn’t know who the bad agents were,” Panagou said. “The reason the [robots] were able to continue is because they were running the resilient algorithm we developed.”
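To see why that can work, here is a toy simulation in the spirit of the experiment (an assumed setup, not the ARC field test): seven agents on a fully connected network, with agent 0 broadcasting a steadily drifting value instead of cooperating. The six honest agents run the wmsr_step rule from the sketch above with F = 1 and still converge toward a common value.

```python
import random

random.seed(1)
x = [random.uniform(0.0, 10.0) for _ in range(7)]  # initial scalar states

for t in range(50):
    x[0] += 5.0                            # adversary injects misinformation
    new_x = list(x)
    for i in range(1, 7):                  # the six cooperative agents
        heard = [x[j] for j in range(7) if j != i]
        new_x[i] = wmsr_step(x[i], heard, F=1)
    x = new_x

# The cooperative agents' states cluster together; the adversary's
# runaway value was trimmed away at every step.
print([round(v, 2) for v in x[1:]])
```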

While this work has clear military applications, Panagou believes it could also help regular people by making self-driving cars and connected vehicles (aerial and ground) safer.

“It is very exciting to see how the abstract models and algorithms that we have [created] work well during practical implementation in real-world applications,” Panagou said.

This research received funding from the Automotive Research Center, a University of Michigan-based U.S. Army Center of Excellence that aims to advance the technology of high-fidelity simulation of military and civilian ground vehicles.


Stacy W. Kish