Case Study Abstracts
Case Study 1:
Forging Further with Foliar Foresight: Perception and Planning for Autonomous Navigation through Vegetated Terrains
Miss. State: Chris Goodin, Marc Moore, Riku Kikuta
U-M: Tulga Ersal, Bogdan Epureanu, James Baxter, Rishitha Paga, Junsik Eom
Industry: Brittney English (Dynetics), Andrew Kwas, Timothy Morris (Northrop Grumman), Chenyu Yi (Mercedes-Benz)
GVSC: Mike Cole, Paramsothy Jayakumar
Autonomous off-road vehicles face the critical challenge of navigating complex terrain with varying topology and vegetation. This integration effort enhances autonomous off-road mobility by developing an advanced trajectory planner that incorporates both terrain topology and vegetation resistance into its navigation strategy.
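To make the planning idea concrete, the sketch below shows one way a grid-based traversal cost could combine terrain slope with predicted vegetation resistance. It is a minimal illustration: the map names, weights, and interface are assumptions, not the project's actual planner.

```python
# Minimal sketch (illustrative, not the actual planner): a grid-based
# traversal cost that penalizes both terrain slope and predicted
# vegetation override resistance along a candidate path.
import numpy as np

def traversal_cost(path_cells, slope_map, veg_resistance_map,
                   w_slope=1.0, w_veg=0.5):
    """Accumulate cost along a candidate path of (row, col) grid cells.

    slope_map          : terrain slope per cell (rad), from elevation data
    veg_resistance_map : predicted override force per cell (N), e.g. from
                         a model trained on aerial imagery
    w_slope, w_veg     : hypothetical trade-off weights
    """
    return sum(w_slope * slope_map[r, c] + w_veg * veg_resistance_map[r, c]
               for r, c in path_cells)

# A planner (e.g., graph search or sampling-based MPC) would then prefer
# paths that trade detour length against slope and vegetation resistance.
```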
This work builds on two ongoing ARC projects. Project 1.40 has introduced a touch-based sensor on an MRZR platform to characterize vegetation override forces; this integration effort extends that work by training a machine learning model to map these forces from aerial imagery. Project 1.41 has developed a trajectory planner for highly mobile navigation on difficult terrain; this integration effort extends the planner to account for vegetation resistance.
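The imagery-to-force mapping could, for instance, be posed as patch-wise regression: each force measured by the touch-based sensor labels the aerial-image patch at the vehicle's location. The sketch below is one plausible formulation; the architecture and all names are assumptions, not the Project 1.40 model.

```python
# Illustrative patch-wise regressor: aerial RGB patch -> override force (N).
import torch
import torch.nn as nn

class VegResistanceNet(nn.Module):
    """Hypothetical CNN that regresses a scalar vegetation override force
    from an RGB aerial-image patch."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)

    def forward(self, patch):                      # patch: (B, 3, H, W)
        return self.head(self.features(patch).flatten(1))

model = VegResistanceNet()
forces = model(torch.randn(4, 3, 64, 64))          # -> (4, 1) predicted forces
```

Evaluating such a model over a tiled aerial map would yield the per-cell vegetation resistance layer the planner consumes.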
The integrated framework is implemented and tested on Mississippi State University's MRZR and proving grounds, with simulation studies providing additional validation. Results demonstrate a significant improvement in autonomous off-road mobility over planners that ignore vegetation resistance and rugged terrain topology, establishing a previously unattained level of capability for autonomous off-road navigation.
Case Study 2:
Towards Robust Behavioral Cloning of Autonomous Driving Using Low-Rank Tensor Decomposition Embeddings
Prof. Wing-Yue Geoffrey Louie¹, Prof. Alex Gorodetsky², Prof. Shravan Veerapaneni², Dr. David Gorsich³, Dr. Mark Brudnak³, Doruk Aksoy², Motaz AbuHijleh¹, Sean Dallas¹, Mihir Vador², Pranav Bahl²
¹ Oakland U.; ² U. of Michigan; ³ GVSC
Behavioral cloning has been widely used to obtain strategies for autonomous driving from expert demonstrations in virtual environments, but several computational challenges limit these approaches at scale. First, human experts typically consider the full scene context, including a physical interpretation of depth and of objects in the environment; this quantity of data, available from simulated camera and LiDAR sensors, is too high-dimensional to provide directly to a behavioral cloning agent. Second, the resulting autonomy model lacks robustness to scenarios outside its training data, necessitating large-scale training or the identification of foundational features.
Oakland University has been working on virtual experimentation, generating high-fidelity autonomous vehicle simulation environments and building highly efficient behavioral cloning models from low-dimensional, hand-chosen features available only in simulation. Meanwhile, the University of Michigan has been extracting latent features from large-scale, high-dimensional data using low-rank tensor decomposition algorithms and applying those latent features to behavioral cloning in video games.
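As a rough illustration of the tensor-decomposition idea, the sketch below compresses a stack of sensor frames with a basic HOSVD (Tucker-style) embedding. The project's actual algorithms may differ (for example, incremental or tensor-train variants), and all dimensions and names here are assumptions.

```python
# Illustrative HOSVD embedding: fit per-mode low-rank bases on a stack of
# frames, then project each frame onto them to get a compact latent vector.
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding of tensor T into a matrix."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_factor(T, mode, rank):
    """Leading left singular vectors of the mode-n unfolding."""
    U, _, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
    return U[:, :rank]

def embed(frame, factors):
    """Project one (H, W, C) frame onto the low-rank mode subspaces."""
    core = frame
    for m, U in enumerate(factors):
        core = np.moveaxis(np.tensordot(U.T, core, axes=(1, m)), 0, m)
    return core.ravel()                       # compact latent feature vector

frames = np.random.rand(100, 64, 64, 3)       # (N, H, W, C) training stack
factors = [mode_factor(frames, m, r) for m, r in [(1, 8), (2, 8), (3, 3)]]
latent = embed(frames[0], factors)            # 8 * 8 * 3 = 192 values/frame
```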
This integration project merges these two lines of work to enable behavioral cloning directly from large-scale, high-dimensional sensor data, yielding higher-performing autonomous agents. We will demonstrate our ability to highly compress the relevant visual and LiDAR data and to use the compressed data for behavioral cloning. We will then investigate and compare the performance and robustness of agents trained on the low-dimensional data against agents trained on the original high-dimensional data.
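The planned comparison could be set up roughly as below: the same behavioral-cloning head is trained once on compressed latent features and once on flattened raw sensor data. The dataset fields, dimensions, and action space are illustrative assumptions.

```python
# Hedged sketch of the comparison: identical policy heads, two encodings.
import torch
import torch.nn as nn

def make_policy(in_dim, act_dim=2):           # act_dim: e.g., steer + throttle
    return nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                         nn.Linear(256, act_dim))

def clone(policy, obs, actions, epochs=10, lr=1e-3):
    """Supervised regression of expert actions from observations."""
    opt = torch.optim.Adam(policy.parameters(), lr=lr)
    for _ in range(epochs):
        loss = nn.functional.mse_loss(policy(obs), actions)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return loss.item()

N, raw_dim, latent_dim = 1024, 64 * 64 * 3, 192
actions = torch.randn(N, 2)                   # expert demonstrations (dummy)
raw_obs = torch.randn(N, raw_dim)             # flattened raw sensor frames
latent_obs = torch.randn(N, latent_dim)       # tensor-decomposition output
clone(make_policy(raw_dim), raw_obs, actions)
clone(make_policy(latent_dim), latent_obs, actions)
```

Robustness would then be probed by evaluating both agents on held-out scenarios outside the training distribution.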