Automotive Research Center

Case Study Abstracts

Case Study 1:
Through the Virtual Lens: Improving Human-Robot Team Performance via Training Reviews with a Virtual Spectator Interface

Quad Members: Dr. Dawn Tilbury, Dr. Lionel Robert, Hongjiao Qiang, Arsha Ali (U. of Michigan); Dr. Wing-Yue Geoffrey Louie, Motaz AbuHijleh, Sean Dallas (Oakland U.); Dr. Kayla Riegner, Dr. Jon Smereka, Dr. Ben Haynes, Dr. Mark Brudnak, Dr. Gregory Pappas, Dr. Denise Rizzo (GVSC); Dr. Samantha Dubrow (MITRE); Dr. Lilia Moshkina (May Mobility)

        In the military, a training review is often used to examine team member behavior and team outcomes. By reviewing what did and did not go well, and why, a training review has the potential to improve future team performance. Research has also shown that team performance improves when humans have greater situation awareness (SA), and that SA is needed to accomplish team goals. We hypothesize that training reviews can improve future SA and thereby improve team performance. To test this hypothesis, this project integrates a virtual spectator interface (Project 2.A92) with a two-robot unmanned ground vehicle reconnaissance mission simulation (Project 2.17) to support training reviews for improving SA and human-robot team performance. Three types of training reviews varying in format (spectator interface, screen recording, verbal description) were tested in a between-subjects user experiment at two locations. The results carry implications for the design of training reviews, including what information may be most valuable to soldiers and the best format in which to present it.
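
        To make the experimental comparison concrete, the sketch below shows one way data from a three-condition, between-subjects design like this could be analyzed in Python. The column names, SA scores, and sample data are illustrative assumptions, not the study's actual measures or analysis code.

    # Hypothetical analysis sketch: one-way between-subjects comparison
    # of the three training-review formats on a situation awareness score.
    import pandas as pd
    from scipy import stats

    # One row per participant: the review format received and a
    # post-review SA score (all values are made up for illustration).
    df = pd.DataFrame({
        "format": ["spectator", "spectator", "recording", "recording",
                   "verbal", "verbal"],
        "sa_score": [0.82, 0.78, 0.71, 0.69, 0.60, 0.64],
    })

    # One-way ANOVA: does mean SA differ across the three formats?
    groups = [g["sa_score"].values for _, g in df.groupby("format")]
    f_stat, p_value = stats.f_oneway(*groups)
    print(f"F = {f_stat:.2f}, p = {p_value:.3f}")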


Case Study 2:
What's a Little Dirt in the Eye? Detection and Adaptive Planning for Off-Road Autonomous Navigation under Soiled Sensors

Quad Members: Dr. Daniel Carruth, Dr. Chris Goodin, Dr. Christopher Hudson, Nicholas Harvel, Riku Kikuta, Jacob Kutch (Mississippi State U.); Dr. Tulga Ersal, Siyuan Yu, Congkai Shen (U. of Michigan); Dr. Paramsothy Jayakumar, Michael Cole (GVSC); Dr. Hui Zhou, Dr. Oliver Jeromin (Rivian); Dr. Nicholas Gaul (RAMDO Solutions); Akshar Tandon (Tesla)

         Reliable navigation by off-road autonomous ground vehicles faces a critical hurdle: continuing to operate safely when sensor data is compromised by environmental factors such as dirt, mud, or water on the sensors. This integration effort tackles the challenge by leveraging and extending two key ARC projects, combining robust perception with adaptive trajectory planning to ensure continuous, reliable autonomous operation under sensor soiling.

         For Project 1.38, Mississippi State University (MSU) uses neural networks to address partial occlusion of camera sensors by dirt or mud. The system detects the presence of occlusions, segments the occlusions from the image, diagnoses their potential effects on system functions, and attempts to fill in small occlusions. The integration effort extends detection and segmentation to LIDAR and estimates the resulting reduction in the LIDAR system's effective field of view.
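
         The sketch below illustrates the shape of this pipeline, not the project's actual models: a real system would use trained neural networks for detection and segmentation, while here a simple threshold stands in so the flow is runnable end to end. All function names, thresholds, and data are assumptions.

    # Illustrative pipeline sketch: detect/segment occlusion, fill small
    # regions, and estimate LIDAR field-of-view loss. Placeholder logic only.
    import numpy as np

    def segment_occlusion(image: np.ndarray, thresh: float = 0.05) -> np.ndarray:
        """Placeholder segmenter: flag near-black pixels as occluded.
        The real system uses a neural network for this step."""
        return image.mean(axis=-1) < thresh

    def inpaint_small_regions(image, mask, max_area=100):
        """Fill small occlusions with the mean of unoccluded pixels;
        larger regions are left for the planner to handle."""
        out = image.copy()
        if mask.sum() <= max_area:
            out[mask] = image[~mask].mean(axis=0)
        return out

    def lidar_fov_reduction(occluded_azimuths: np.ndarray,
                            fov_deg: float = 360.0) -> float:
        """Estimate effective field-of-view loss as the fraction of
        azimuth bins whose returns are blocked."""
        return fov_deg * occluded_azimuths.mean()

    # Toy example: a camera frame with a simulated dirt blotch in one corner.
    frame = np.random.rand(64, 64, 3)
    frame[:10, :10] = 0.0                      # simulated soiling
    mask = segment_occlusion(frame)
    cleaned = inpaint_small_regions(frame, mask)
    blocked = np.zeros(360, dtype=bool)
    blocked[40:70] = True                      # 30 deg of blocked LIDAR returns
    print(f"Estimated LIDAR FOV loss: {lidar_fov_reduction(blocked):.0f} deg")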

         For Project 1.A89, the University of Michigan (UM) developed trajectory planning algorithms that push autonomous vehicles to their limits on uncertain terrains with soft soils. The integration effort extends this work to take the LIDAR's estimated occluded field of view as input and to dynamically adapt the vehicle's trajectory, bringing occluded areas back into the LIDAR's field of view while still ensuring safe traversal of the environment.
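
         A minimal sketch of the occlusion-aware planning idea follows: among candidate headings, prefer one that rotates the usable field of view toward previously occluded sectors while staying close to the nominal path. The cost weights and geometry are assumptions for illustration, not the project's actual planner, which handles full vehicle dynamics and soft-soil terrain limits.

    # Toy heading selection trading path tracking against re-exposing
    # the occluded sector; purely illustrative.
    import numpy as np

    def heading_cost(heading, nominal_heading, occluded_center,
                     w_track=1.0, w_expose=2.0):
        """Weighted sum of tracking error and occluded-sector exposure
        error, with angles wrapped to [-pi, pi]."""
        track_err = abs(np.angle(np.exp(1j * (heading - nominal_heading))))
        expose_err = abs(np.angle(np.exp(1j * (heading - occluded_center))))
        return w_track * track_err + w_expose * expose_err

    candidates = np.linspace(-np.pi, np.pi, 73)    # 5-degree grid
    nominal = 0.0                                  # straight ahead
    occluded = np.deg2rad(55)                      # center of blocked sector
    best = min(candidates, key=lambda h: heading_cost(h, nominal, occluded))
    print(f"Chosen heading: {np.degrees(best):.0f} deg")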

         The resulting framework has been integrated with the open-source NATO autonomy stack, which is based on Mississippi State University's NATURE stack from ARC Project 1.31. Simulated and physical trials with Polaris MRZRs at both MSU and UM demonstrate significant improvements in navigational robustness, ensuring reliable vehicle operation even when sensors are compromised. This synergy between perception and planning enables more resilient autonomous vehicle operation in challenging environments.