Technical Talks Abstracts
Jump to Session 1.A | Session 1.B | Session 1.C | Session 2.A | Session 2.B | Session 2.C
Day 1: Tuesday June 17
Session 1.A: Perception and Planning
Project 1.38
Robust Perception in Adverse Conditions: Detecting, Diagnosing, and Recovering from Camera Occlusions
Nicholas Harvel, Jacob Kutch, Guled Liban, Cameron Verser, Daniel Carruth, Oliver Jeromin, Paramsothy Jayakumar
In off-road environments, camera sensors are frequently degraded by mud, dust, water spray, and snow. These occlusions distort or obscure imagery and reduce autonomous system performance. While prior published work focused on water-based occlusions, this project has expanded to include dirt, mud, and mixed materials with varying water content and application methods. This talk presents our progress on a multi-faceted approach to detecting, diagnosing, and recovering from camera occlusions. We explored both a Trans U-Net model and a SegFormer-based architecture for occlusion segmentation. We developed diagnostic tools that compare occlusion masks to spatial priors to assess the functional impact of occlusions, and we use image-to-image translation models (e.g., pix2pix) to recover occluded imagery and restore downstream perception performance.
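For illustration, a minimal sketch of the mask-versus-prior diagnostic idea follows; the occlusion mask, the spatial prior, and the scoring rule are all invented stand-ins rather than the project's actual tooling.

```python
import numpy as np

# Hypothetical sketch: weight each occluded pixel by how much the downstream
# task relies on that region of the image. The prior here is a made-up example
# (bottom-center of the frame matters most for path following).
H, W = 480, 640
occlusion_mask = np.zeros((H, W), dtype=bool)
occlusion_mask[200:400, 250:450] = True           # e.g., a mud splatter

# Spatial prior: importance of each pixel to the driving task (sums to 1).
ys = np.linspace(0, 1, H)[:, None]                 # more weight lower in frame
xs = 1.0 - np.abs(np.linspace(-1, 1, W))[None, :]  # more weight near center
prior = ys * xs
prior /= prior.sum()

impact = float((occlusion_mask * prior).sum())     # 0 = harmless, 1 = fully blocked
print(f"functional impact score: {impact:.3f}")
```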
Project 1.39
Adaptive and Efficient Perception for Autonomous Ground Vehicles Operating in Highly Stochastic Environments under Sensing Uncertainties
Junyao Wang, Luke Chen, Jonathon Smereka, Pramod Khargonekar, Mohammad Al Faruque
Autonomous vehicles rely on robust and efficient perception systems to operate safely under diverse and challenging conditions. This talk summarizes our recent efforts in adaptive multimodal sensor fusion for AV perception, with a focus on how, when, and why to fuse heterogeneous sensor data such as camera, LiDAR, and radar. We present a series of methodologies and frameworks developed to address these questions: (1) HydraFusion, which dynamically adapts the fusion strategy based on sensing context to reduce energy consumption and latency; (2) DivFuse, which explores the role of diversity-aware ensemble fusion in improving robustness; (3) HyperDUM, a prototype-based deterministic uncertainty quantification method leveraging hyperdimensional computing for real-time performance; and (4) CRUISE, which incorporates vision-language models (VLMs) to enhance contextual reasoning and feature reliability under uncertainty. Together, these contributions offer a principled and efficient pathway toward resilient perception in autonomous systems.
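As a toy illustration of the context-adaptive fusion idea behind HydraFusion, the sketch below selects a fusion branch from a cheap context estimate; the branch choices and the threshold rule are invented for exposition, whereas the actual gate is learned.

```python
# Pick which fusion branch to run from a cheap context estimate, so the full
# multi-branch network is not executed every frame. All values are illustrative.
def estimate_context(frame_stats):
    if frame_stats["illumination"] < 0.2:
        return "night"
    if frame_stats["rain_score"] > 0.5:
        return "rain"
    return "clear"

BRANCHES = {
    "clear": ["camera"],              # cheapest branch suffices
    "night": ["lidar", "radar"],      # cameras unreliable in the dark
    "rain":  ["camera", "radar"],     # lidar degraded by spray
}

def select_branch(frame_stats):
    ctx = estimate_context(frame_stats)
    return ctx, BRANCHES[ctx]

print(select_branch({"illumination": 0.1, "rain_score": 0.0}))
# -> ('night', ['lidar', 'radar'])
```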
Project 1.40
Touch-based Sensing for Evaluating Vegetation in Complex Navigation Environments
Chris Goodin, Marc N. Moore, Ethan Salmon, Riku Kikuta (Miss State); Michael P. Cole, Paramsothy Jayakumar (GVSC); Brittney English (Dynetics)
Autonomous passenger-sized wheeled and tracked vehicles navigating off-road terrain often encounter vegetation along their desired trajectory. Vehicles of this size may be able to drive over (override) small vegetation with little impact on mobility, or it may be possible to avoid the vegetation and take an alternate path. While human drivers weigh many competing factors in real time (e.g., the discomfort of swerving to avoid small vegetation, the jerk associated with hitting medium-sized vegetation, the risk of damage to the vehicle) to make navigation decisions around vegetation, autonomous drivers often struggle to make reasonable decisions about navigating vegetation in off-road environments. Until now, autonomous vehicles have lacked a vehicle-independent method for estimating the override resistance of vegetation from lidar and camera data. In this work, we show the results of our recent direct measurements of override forces on medium-sized passenger vehicles navigating through clumps of small vegetation using an integrated pushbar system; these measurements are the first of their kind for vehicles under 2,000 lbs. We use sensor fusion and machine learning to develop a predictive model for vegetation override resistance from fused monocular camera and lidar data, and we develop a convolutional neural network (CNN) that predicts override force from aerial RGB imagery.
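A minimal sketch of the kind of CNN regressor described above follows, assuming a scalar force target and a small aerial RGB patch; the architecture, patch size, and units are illustrative, not the trained model from this project.

```python
import torch
import torch.nn as nn

# Illustrative regressor: aerial RGB patch in, scalar override-force estimate out.
class OverrideForceCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)    # predicted override force (N)

    def forward(self, x):               # x: (B, 3, H, W) RGB patch
        return self.head(self.features(x).flatten(1))

model = OverrideForceCNN()
patch = torch.rand(1, 3, 64, 64)        # dummy aerial image patch
print(model(patch).item())              # untrained, so the value is meaningless
```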
Project 1.41
Resilient Trajectory Planning for Extreme Mobility on Challenging Slopes
Tulga Ersal (PI), Bogdan Epureanu (Co-PI), James Baxter (GSRA) (UM); Paramsothy Jayakumar (GVSC); Chenyu Yi (Mercedes-Benz); Andrew Kwas, Timothy Morris (Northrop Grumman)
This talk presents a novel local trajectory planner capable of controlling an autonomous off-road vehicle on steep, rugged terrain. Autonomous vehicles are currently unable to operate on steep off-road slopes: the steepness of the terrain necessitates high speeds, yet the roughness of the terrain makes such operation dangerous. Successful navigation requires pushing vehicles to their dynamic limits, which necessitates complex coordination of speed and steering. This project addresses the challenge with a novel model predictive control (MPC) formulation as the local trajectory planner. A new dynamical model for off-road vehicles on rough, non-planar terrain is used as the prediction model, and extreme mobility, including tire liftoff without rollover, is allowed through a new safety constraint. Real-time feasibility is achieved through parallelized GPGPU computation. An implementation of the planner is presented, and its ability to provide safe, extreme trajectories is studied through both simulated trials and full-scale physical experiments. The results show that the new planner achieves lower rollover rates and higher success rates than a state-of-the-art baseline across several challenging scenarios that push the vehicle to its mobility limits.
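For readers unfamiliar with the structure of such planners, a generic receding-horizon formulation is sketched below; the symbols are illustrative, and the project's specific cost, terrain-aware prediction model, and liftoff/rollover safety constraint are not reproduced here.

```latex
\begin{aligned}
\min_{u_0,\dots,u_{N-1}} \quad & \sum_{k=0}^{N-1} \ell(x_k, u_k) + V_f(x_N) \\
\text{s.t.} \quad & x_{k+1} = f(x_k, u_k), \qquad x_0 = x(t), \\
& g_{\mathrm{safe}}(x_k) \le 0, \qquad u_k \in \mathcal{U},
\end{aligned}
```

Here \(f\) plays the role of the non-planar terrain vehicle model and \(g_{\mathrm{safe}}\) the safety constraint permitting tire liftoff while preventing rollover; the optimization is re-solved at every step over the horizon \(N\), which is what the parallelized GPGPU computation accelerates.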
Project 1.42
Hybrid Learning-based Policy for High-Speed Autonomous Off-Road Navigation
Ram Vasudevan (PI), Elena Shrestha, Austin Buchan, Lucas Lymburner (UM); Paramsothy Jayakumar, Calvin Cheung (GVSC); Ahmad Mekky (MathWorks)
Autonomous off-road navigation poses significant challenges due to limited prior knowledge of operational environments (e.g., terrain types), complex vehicle-terrain interactions (e.g., tires on deformable surfaces), and the stochastic nature of real-world conditions. Traditional model-based approaches, such as model predictive control (MPC), offer safety guarantees within a limited operational range but fail to generalize to dynamic and uncertain conditions. Data-driven methods, such as reinforcement learning (RL), adapt to changing conditions through interaction data but lack formal safety guarantees and often rely on black-box models. Hybrid policies aim to combine the strengths of both approaches by integrating model-based safety with the adaptability of learning-based methods. This ongoing effort aims to use data collected online to compute trajectories that safely navigate unknown terrain through an ensemble of expert policies. The hybrid policy is continuously updated throughout the mission by a separate gating network trained using RL; actions are blended by assigning confidence measures to the outputs of each controller, allowing adaptive weighting based on state conditions. The resulting hybrid controller and ensemble of experts are evaluated through real-world experiments and physics-based simulation.
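A minimal sketch of confidence-weighted action blending follows; the experts, the gating scores, and the action space are stand-ins, and in the project the gate is a separate RL-trained network rather than the stub used here.

```python
import numpy as np

# Blend an ensemble of expert policies with confidence weights from a gate.
def mpc_expert(state):      return np.array([0.8, 0.0])   # [throttle, steer]
def rl_expert(state):       return np.array([0.5, 0.3])
def cautious_expert(state): return np.array([0.2, 0.1])

EXPERTS = [mpc_expert, rl_expert, cautious_expert]

def gate(state):
    # Stand-in for the learned gating network: one confidence score per
    # expert for the current state, softmax-normalized.
    logits = np.array([2.0, 1.0, 0.5])          # would come from the network
    w = np.exp(logits - logits.max())
    return w / w.sum()

def hybrid_action(state):
    weights = gate(state)
    actions = np.stack([expert(state) for expert in EXPERTS])
    return weights @ actions                     # convex blend of expert actions

print(hybrid_action(state=np.zeros(4)))          # blended [throttle, steer]
```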
Session 1.B: Human-Autonomy Interactions
Project 2.17
Adapting Robot Communication to Estimated Situation Awareness Improves Performance in Human-Robot Teams
Dawn Tilbury (PI), Lionel Robert (PI), Arsha Ali, Wonse Jo (UM); Jonathon Smereka, Kayla Riegner (GVSC)
When humans supervise multiple semi-autonomous robots while also attending to their own tasks, they may lack the situation awareness needed to assist their robot teammates. The human's situation awareness therefore needs to be monitored so that interventions can be made when it is poor. While prior work has developed situation awareness estimators, their outputs have yet to be used to drive interventions aimed at improving situation awareness. We propose a system combining real-time situation awareness estimation with adaptive robot communication interventions to enhance situation awareness. The estimator uses simple, interpretable logistic regression models that take inputs from both eye-tracking and behavioral measures; cross-validation achieved an average accuracy of 74%. To test the effectiveness of the system, we conducted a between-subjects experiment in which 43 subjects teamed with two semi-autonomous robots while performing a secondary task, and the robots adapted their communication based on estimated situation awareness and contextual factors. This work demonstrates how adaptive robot communication based on estimated situation awareness improves both situation awareness and performance.
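To make the estimator concrete, the sketch below fits an interpretable logistic regression on synthetic stand-ins for eye-tracking and behavioral features; the feature definitions and data are invented for illustration and are not the study's measures.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200
X = np.column_stack([
    rng.uniform(0, 1, n),     # fraction of gaze time on the robot's video feed
    rng.uniform(0, 5, n),     # seconds since last glance at the robot
    rng.integers(0, 2, n),    # responded to the robot's last message (0/1)
])
# Synthetic ground truth: more gaze + recent glances + responses -> good SA.
y = (X[:, 0] - 0.15 * X[:, 1] + 0.5 * X[:, 2]
     + rng.normal(0, 0.3, n) > 0.4).astype(int)

clf = LogisticRegression().fit(X, y)
print("coefficients:", clf.coef_.round(2))            # interpretable weights
print("P(good SA):", clf.predict_proba([[0.2, 4.0, 0]])[0, 1].round(2))
```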
Project 2.18
Task Allocation and Communication Strategies in Human-AI Teaming: An Empirical Investigation Using Dual-Task Simulation
Sang-Hwan Kim (PI), Shruthi Venkatesha Murthy (GSRA) (UM-Dearborn)
This research explores optimal Human-Autonomy Teams (HAT) in dynamic dual-task environments, where AI supports secondary tasks. We designed a simulation featuring a primary skill-based motor task paired with a secondary rule-based decision-making task for target assessment. Initial experiments with human-human collaboration revealed natural task allocation and communication strategies. In the second year, we incorporated a pseudo-AI assistant, testing four task allocation strategies, two levels of AI decision-making transparency, and two communication modalities. Performance was assessed using metrics like speed, accuracy, workload, situation awareness, and trust. Results show that concise non-verbal information boosts performance when humans are more involved, whereas the take-over strategy, where control shifts between human and AI, harms situation awareness and trust due to mode transitions. These findings offer valuable insights for designing HAT systems in areas such as autonomous driving and complex mission execution.
Project 2.19
Who’s The Boss? Understanding Human-Autonomy in Shared Driving Applications
PIs: Kira Barton, Chris Vermillion (UM); Quad Members: Scott James (Applied Dynamics), Matthew Castanier (GVSC); Student Researchers: Aleksandra Dudek, Patrick Linford, Zihan Yu (UM)
Human-autonomy teaming in semi-autonomous vehicles relies on control strategies that account for variable human behavior. This work explores two parallel approaches: one uses game-theoretic modeling to understand human, autonomy, and joint human-autonomy interactions; the other adapts the control strategy through an iterative learning control algorithm to account for human behavioral variations. A case study was used to create a preliminary data library mapping human behavior to specific strategies informed by level-k cognitive modeling. Using the library, the mapping between human behavior and level-k theory is evaluated through the classification of interactions within human-autonomy teams under directed scenarios. A parallel case study evaluated the learning algorithm as applied to arbitration weights between human and autonomy control of a shared-control vehicle along discrete segments of a closed-loop driving circuit. The results demonstrate that this personalized strategy significantly improves driving performance. Proposed future work is twofold: (i) use the foundational learning strategies to improve teaching efficiency in a simulation environment, and (ii) apply a game-theoretic approach to enhance decision-making in multi-agent human-on-the-loop simulation environments.
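The arbitration-learning idea can be sketched as an iteration-domain update over track segments, as below; the per-segment optimum, noise level, and gain are illustrative assumptions, and in practice the error signal comes from measured driving performance rather than a known target.

```python
import numpy as np

# ILC-style update of human/autonomy arbitration weights, learned lap-to-lap
# over discrete segments of a closed driving circuit.
rng = np.random.default_rng(0)
n_segments = 10
alpha = np.full(n_segments, 0.5)                 # autonomy authority per segment
alpha_star = rng.uniform(0.3, 0.9, n_segments)   # unknown segment-wise optimum
gamma = 0.3                                      # learning gain

for lap in range(25):                            # learning acts across laps
    error = (alpha_star - alpha) + rng.normal(0, 0.02, n_segments)
    alpha = np.clip(alpha + gamma * error, 0.0, 1.0)

print(np.abs(alpha - alpha_star).max().round(3))  # small after ~25 laps
```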
Project 2.20
Multi-Directional Reliance and Effective Collaborative Human-Autonomy Teaming
Kenna Henkel, Audrey L. Aldridge, Christopher Hudson, Karl Smink, Andrew R. Buck, Derek T. Anderson, Daniel W. Carruth, Cindy L. Bethel (Miss State); Mary Quinn (Leidos); Victor Paul, Rachel Anderson, Drew Hoelscher (GVSC)
Effective collaboration in human-autonomous agent teams depends on both shared information and shared understanding. Imbalances in trust and reliance can lead to inefficient task performance, especially when teammates do not interpret tasks or capabilities in the same way. In the first year, this project used a configurable virtual testbed to investigate how levels of information availability influence trust and reliance between a human and two autonomous agents engaged in a search task. We manipulated three information-sharing conditions: no information, access to personal but not shared information, and full shared information. It was expected that increased information availability would sustain reliance and increase trust, but results reveal more nuanced and condition-dependent patterns in trust-reliance dynamics.
In year 2, a revised and extended study shifts focus from information availability to information interpretation, using the testbed to examine how mismatches in fundamental understanding of the tasks (lack of shared mental models) impact team dynamics. This phase will also evaluate the potential for our trust-reliance assessments to identify issues affecting team performance and trust.
Project 2.21
Human-Autonomy Collaboration for Escaping Local Minima
Contributors: Alia Gilbert, Gurnoor Kaur, Kevin Mendez, Yule Xie, Lionel Robert, and Dawn Tilbury
Quad Members: Dawn Tilbury (PI), Lionel P. Robert Jr. (Co-PI), Alia Gilbert (UM); Jon Smereka, Mark Brudnak (GVSC); Paul Rybski (Neya Systems); Ahmad Mekky (MathWorks)
Effective human supervision of autonomous robots in high-stakes scenarios requires efficient intervention, particularly when unmanned ground vehicles (UGVs) encounter local minima problems. This study investigates user interface designs to support human intervention in resolving such issues without a complete system takeover. We conducted a human-subjects experiment comparing two intervention methods: direct waypoint selection via mouse input and directional commands via arrow keys. Participants supervised two UGVs while simultaneously performing a secondary task, simulating real-world multitasking scenarios. Results demonstrate that mouse-based waypoint selection led to significantly more efficient UGV paths than arrow key controls and was also preferred by participants. Our findings contribute to the design of human-autonomy interfaces.
Session 1.C: Materials and Structures
Project 3.19
Intelligent ultrasound to adaptively control interfacial properties and reactions
Wei Lu, Bogdan Epureanu, Bogdan Popa, Max Nyffenegger (UM); Katie Sebeck, Matt Castanier (GVSC); Wayne Cai (General Motors)
Uncontrolled, non-uniform metallic growth on electrode surfaces inside the layered structure of modern batteries can cause thermal runaway, reduce battery life, and limit charging speeds. Preventing this dendrite growth is a long-standing objective on the path to battery fast charging and to next-generation hybrid and fully electric vehicles (EVs) that can operate in tough off-road and combat environments. Applying vibrational force to a battery induces a flow in the liquid electrolyte contained within its porous layers. This flow homogenizes ion concentrations within the electrolyte under fast-charging conditions, where there are typically areas of significant ion depletion, known as concentration polarization. The homogeneous ion concentration results in uniform electroplating at the electrode-electrolyte interface, thereby preventing dendritic growth. This technology has been demonstrated in the porous separator of a zinc electrochemical cell, and work is underway to extend it to real-world lithium-ion EV battery packs.
Project 3.23
Adaptive Structures with Embedded Autonomy for Advancing Ground Vehicles
Dr. Kon-Well Wang (PI, UM), Dr. Matt Castanier (U.S. Army GVSC Member), Dr. Jayanth Kudva (Industry Member, NextGen Aeronautics, Inc.), Dr. Ellen C. Lee (Industry Member, Ford Motor Company), Minh Nguyễn (PhD Student), Dr. Patrick Dorin (Postdoc)
The rapid advances in autonomous systems, such as automated vehicles, demand that future adaptive structural and material systems become even more intelligent. This need inspired us to advance from the conventional platform, which relies mainly on add-on digital computers to achieve intelligence, to mechano-intelligence that embodies intelligence in the mechanical domain. Although interesting studies have been attempted, there is no systematic foundation for constructing and integrating the various elements of mechano-intelligence, namely perception, learning, and decision-making, with sensory input and execution output for engineering functions. In this study, we lay down this foundation by harnessing the physical computation concept, advancing from mere computing to multifunctional mechano-intelligence. As exemplar testbeds, we constructed mechanically intelligent metastructures that achieve wave adaptation via physical computing and uncover multiple engineering functions, ranging from adaptive noise and vibration control to wave logic gates and phononic communication. This research will pave the path to autonomous structures that surpass the state of the art, with lower power consumption, more direct interactions, and much better survivability in harsh environments and under cyberattack.
Project 3.24
Additively manufactured all-metallic metamaterial solutions for protection of electronic systems in autonomous vehicles
PIs: Valdevit, Apelian (UCI)
Abstract TBA
Project 3.27
Tackling complementarity problems at scale via continuous-variable quantum computing
PI: Shravan Veerapaneni (UM)
Abstract TBA
Day 2: Wednesday June 18
Session 2.A: Digital Engineering and Multi-Agent Systems
Project 5.21
Multi-Phase Vector Symbolic Architectures for Distributed and Collective Intelligence in Multi-Agent Autonomous Systems
Shay Snyder, Ryan Shea, Andrew Capodieci, David Gorsich, Maryam Parsa
Real-time robotic systems face a fundamental trade-off between computational efficiency, energy consumption, and model determinism. World modeling, a key objective of many robotic systems, often begins with occupancy grid mapping (OGM), which discretizes the environment and assigns probabilities to attributes like occupancy and traversability. Traditional OGM methods are interpretable but computationally intensive, while neural methods improve efficiency but lack determinism and require domain-specific pretraining. In this work, we present VSA-OGM, a hyperdimensional OGM framework leveraging vector symbolic architectures and a novel use of Shannon entropy. VSA-OGM offers the stability of traditional models with the efficiency of neural methods, achieving up to 200× latency reductions over covariant traditional approaches and 1.5× latency improvements over neural methods, without domain-specific pretraining. We compare VSA-OGM against Bayesian Hilbert Maps (BHM) in reinforcement learning-based path planning tasks across toy and F1-Tenth-inspired driving environments. VSA-OGM maintains comparable performance while improving generalization by ~47% in unseen scenarios. We also describe integration with Neya Systems' VISE simulator and introduce a multi-agent communication framework that extends VSA-OGM to adversarial, battlefield-like settings, reinforcing its suitability for real-world deployment.
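For orientation, the sketch below shows the core VSA ingredient that frameworks like VSA-OGM build on: fractional power encoding of coordinates into complex hypervectors, superposition of observations, and dot-product queries. The dimensionality and length scale are illustrative, and the framework's Shannon-entropy feature extraction is omitted.

```python
import numpy as np

# Minimal VSA spatial encoding via fractional power binding (FHRR style).
D = 4096
rng = np.random.default_rng(0)
X_PHASE = rng.uniform(-np.pi, np.pi, D)   # random base phases, x axis
Y_PHASE = rng.uniform(-np.pi, np.pi, D)   # random base phases, y axis

def encode(x, y, ls=0.5):
    """Encode a 2-D point as a complex unit hypervector."""
    return np.exp(1j * (X_PHASE * x + Y_PHASE * y) / ls)

# Bundle (superpose) occupied lidar returns into a single memory hypervector.
occupied = [(1.0, 2.0), (1.2, 2.1), (5.0, 5.0)]
memory = np.sum([encode(x, y) for x, y in occupied], axis=0)

# Querying is a dot product: high similarity near stored points, ~0 elsewhere.
for q in [(1.1, 2.0), (4.0, 1.0)]:
    sim = np.real(np.vdot(encode(*q), memory)) / D
    print(q, f"{sim:.3f}")
```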
Project 5.22
Detecting Elusive Faults in ROS2 Systems using a Multi-Objective Genetic Algorithm
UM: Nick Vlahopoulos (PI), Sean Hickey (GSRA); GVSC: Jonathon Smereka, Joseph Madak, Paul Bounker, John (Jack) Hartner
Robotic vehicles combine hardware capabilities with software-driven operations, and the need for verification and validation of the software used to operate them is well recognized. Because testing procedures are labor-intensive, roughly half of the entire software development lifecycle cost typically originates from testing efforts, while retesting software after upgrades accounts for eighty percent of the entire maintenance cost. This research develops a Genetic Algorithm (GA) based test-case generation method that systematically discovers faults by optimizing fault severity and input diversity while encouraging a balanced fault distribution. The development targets software systems based on the Robot Operating System 2 (ROS2) library. The GA determines the most effective artificial changes to introduce into the data flow of the software under test in order to generate errors, allowing elusive faults to be discovered with minimal human intervention. A lightweight simulator is used to demonstrate how the main elements of the new software testing process operate. Ongoing efforts toward demonstrating how the automated test capability operates with ARCS will also be discussed.
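A toy version of the GA loop is sketched below, assuming a scalarized severity-plus-diversity fitness and an invented message-perturbation genome; the project's ROS2 harness, objectives, and diversity mechanism are more sophisticated.

```python
import random

random.seed(0)
FIELDS = ["cmd_vel.linear.x", "cmd_vel.angular.z", "scan.range", "odom.stamp"]

def random_genome():
    return {"field": random.choice(FIELDS),
            "scale": random.uniform(0.0, 5.0),       # perturbation magnitude
            "drop_rate": random.uniform(0.0, 1.0)}   # fraction of messages dropped

def severity(genome):
    # Stand-in for replaying a scenario with the perturbation applied and
    # scoring observed failures (crashes, constraint violations, ...).
    return genome["scale"] * (1.0 + genome["drop_rate"])

def fitness(genome, population):
    # Severity plus a diversity (novelty) term to spread faults across inputs.
    div = sum(abs(genome["scale"] - g["scale"]) for g in population) / len(population)
    return severity(genome) + 0.5 * div

pop = [random_genome() for _ in range(20)]
for generation in range(30):
    ranked = sorted(pop, key=lambda g: fitness(g, pop), reverse=True)
    parents, children = ranked[:10], []
    for _ in range(10):
        a, b = random.sample(parents, 2)
        child = {k: random.choice([a[k], b[k]]) for k in a}        # crossover
        if random.random() < 0.3:                                  # mutation
            child["scale"] = min(5.0, max(0.0, child["scale"] + random.gauss(0, 0.5)))
        children.append(child)
    pop = parents + children

print(max(pop, key=severity))   # most fault-inducing perturbation found
```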
Project 5.23
A Hierarchical Transformer Approach to Automate Co-Design of Vehicle Attributes and their Team Operations
Bogdan Epureanu (PI), Anirudh Kanchi, Soham Purohit (UM); Oleg Sapunkov, Anthony Dolan (GVSC); Arnold Martinez, Weston Murphy (Aberdeen Test Ctr); Matthew Foglesong (NAMC)
Creating AI-powered teams of autonomous vehicles involves conceptualization, designer input, intelligence empowerment, and performance experimentation. This process becomes especially challenging when developing multi-agent systems that require human-autonomy teaming, due to complex interactions between heterogeneous agents and operational environments. Existing approaches either solve teaming strategies given pre-defined physical attributes of vehicles or find attributes that satisfy pre-defined strategies. In this research, we devise a more general approach that simultaneously co-designs the physical attributes of the agents in the team and their autonomous behaviors and teaming strategies. We present a hierarchical transformer approach that optimizes the team design at the higher level and the teaming strategies at the lower level, and we highlight the effectiveness of the transformer framework on the team design problem and on teaming strategy optimization independently. We demonstrate the co-design method in a multi-agent disaster-relief operation considering vehicle capabilities, adversaries, task dynamics, and constraints on physical attributes. This effort aims to automate a co-design process that cost-effectively leverages available resources and assets for maximum teaming effectiveness and adapts quickly to expectations and changes in operational environments.
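Structurally, the co-design is a two-level optimization, which the schematic below stubs out with random search at both levels; in the project each level is handled by a transformer, and the design variables and mission score here are invented for illustration.

```python
import random

random.seed(0)

def sample_design():
    return {"speed": random.uniform(5, 25),          # physical attributes
            "payload": random.uniform(50, 500),
            "sensor_range": random.uniform(20, 150)}

def best_strategy_value(design, n_trials=50):
    # Lower level: optimize the teaming strategy for this fixed design and
    # return the value of the best strategy found (stubbed as random search
    # over a toy mission-score function).
    def mission_score(strategy):
        return (design["speed"] * strategy["aggressiveness"]
                + design["sensor_range"] * (1 - strategy["aggressiveness"])
                - 0.01 * design["payload"])
    return max(mission_score({"aggressiveness": random.random()})
               for _ in range(n_trials))

# Upper level: search over designs, scoring each by its optimized strategy.
best = max((sample_design() for _ in range(100)), key=best_strategy_value)
print(best)
```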
Session 2.B: Human-Autonomy Interactions
Project 2.22
Preference-based Multi-agent Reinforcement Learning for Human-Robot Collaborative Bounding Overwatch
Shahil Shaik, Ryan Nanko, Yue Wang (Clemson University), Jon Smereka (GVSC)
Prior work in bounding overwatch has explored robotic "wingmen" assisting humans but has focused on a forward-only perspective. In this project, we develop a human-robot teaming framework that leverages multi-robot capabilities for multi-viewpoint overwatch, merging human cognition with autonomous systems. We elicit human responses across multi-attribute, multi-alternative choice problems and learn the evolving preference dynamics during bounding overwatch based on Decision Field Theory (DFT). The need for real-time adaptability and seamless coordination among humans and robots motivates our use of multi-agent reinforcement learning (MARL) for collaborative task allocation in human-centric bounding overwatch operations. Because of the high communication overhead and limited bandwidth in multi-robot bounding overwatch, we propose a novel distributed MARL approach based on graph neural networks (GNNs) in which agents share local observations and the policy synthesizes augmented observations to approximate global states. The preference dynamics are then encoded as part of the MARL reward function, allowing robotic agents to better align with human cognitive processes and adapt more effectively in collaborative bounding overwatch tasks.
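The local-exchange idea can be sketched as follows: each robot augments its own observation with an aggregation of in-range neighbors' observations before the policy acts. The mean aggregator, communication range, and observation contents below are illustrative stand-ins for the learned GNN.

```python
import numpy as np

rng = np.random.default_rng(0)
positions = rng.uniform(0, 100, (4, 2))   # 4 robots on a 100 m field
obs = rng.normal(size=(4, 8))             # each robot's local observation
COMM_RANGE = 40.0                         # only nearby robots exchange data

def neighbors(i):
    d = np.linalg.norm(positions - positions[i], axis=1)
    return [j for j in range(len(positions)) if j != i and d[j] < COMM_RANGE]

def augmented_observation(i):
    nbrs = neighbors(i)
    msg = obs[nbrs].mean(axis=0) if nbrs else np.zeros(obs.shape[1])
    return np.concatenate([obs[i], msg])  # local + aggregated neighbor info

for i in range(4):
    print(i, neighbors(i), augmented_observation(i).shape)  # (16,) per robot
```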
Project 2.23
Incremental tensor decompositions for discovering low-dimensional latent spaces and their applications for generative modeling
Doruk Aksoy, Pranav Bahl, Alex Gorodetsky (PI), Shravan Veerapaneni (PI) (UM)
In this talk, we will describe our recent development and deployment of incremental tensor decompositions for enabling large-scale data analysis. We begin by motivating the problem from the perspective of analyzing virtual gameplay to perform behavioral cloning of expert players, and we highlight a motivating integration project with Oakland University in which we seek to enable behavioral cloning for vehicles in a virtual game engine. We then describe the incremental algorithms we have developed and compare their performance to the existing state of the art, highlighting improvements in speed, compression quality, and generalization. Finally, we describe how generative diffusion models can leverage the resulting latent space and overview the benefits that may be achieved as a result.
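As a two-dimensional analogue of the incremental idea, the sketch below updates a truncated SVD as new data columns stream in rather than re-decomposing from scratch; the tensor-train algorithms developed in this project generalize this style of update to higher-order data, and all shapes and ranks here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
r = 5                                            # retained rank

def incremental_svd(U, S, new_cols):
    # Project new columns onto the current basis, expand with the residual,
    # re-diagonalize the small core, and truncate back to rank r.
    proj = U.T @ new_cols
    resid = new_cols - U @ proj
    Q, R = np.linalg.qr(resid)
    K = np.block([[np.diag(S), proj],
                  [np.zeros((Q.shape[1], len(S))), R]])
    Uk, Sk, _ = np.linalg.svd(K, full_matrices=False)
    U_new = np.hstack([U, Q]) @ Uk
    return U_new[:, :r], Sk[:r]

# Stream batches of feature vectors and maintain a rank-r latent basis.
U, S, _ = np.linalg.svd(rng.normal(size=(100, r)), full_matrices=False)
for _ in range(10):
    batch = rng.normal(size=(100, 20))
    U, S = incremental_svd(U, S, batch)

print(U.shape, S.round(2))                       # (100, 5) basis, top singular values
```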
Project 2.A98
Exploring the Influence of Embodiment on Data and Conversation Quality for Virtual Agent Interviewers
Absalat Getachew (Presenter), Andrea Macklem-Zabel, Wing-Yue Geoffrey Louie (OU); Mark Brudnak (GVSC)
Semi-structured interviews are commonly utilized to capture soldier perceptions after they have interacted with a newly developed technology, such as a next-generation combat vehicle. In the immersive simulation group at GVSC, up to 120 soldiers would need to be interviewed after a virtual experiment; consequently, semi-structured interviews are held in groups as an After-Action Review, because one-on-one interviews are time-consuming and difficult to scale to many participants. Our work explores how virtual agents can automate the process of conducting simultaneous interviews to collect comprehensive, high-quality qualitative responses quickly and consistently. In our study, we examined how the level of embodiment of a machine interviewer affects the interview experience and the quality of responses. We were particularly interested in whether adding more human-like conversational features would improve the depth and clarity of answers, and whether that effect might be shaped by how participants perceived the system in terms of trust, usability, workload, engagement, and social presence.
Session 2.C: Terramechanics
Project 3.20
Modeling of a ground vehicle operating in shallow water
Hiroyuki Sugiyama (PI), Casey Harwood (Co-PI), Hiroki Yamashita, Michael Swafford, Daniel Matthew, Nathan Tison, Arkady Grunin, Juan E. Martin, Karl Leodler
Accurate prediction of vehicle mobility in shallow water, including water fording and river crossings, is crucial for effective operational planning and autonomous navigation in highly complex environments. However, due to the complexity of integrating computational fluid dynamics (CFD) and multibody dynamics (MBD) mobility solvers, only limited studies have been conducted on vehicle-water interaction. Furthermore, there is little or no experimental data available describing the effects of shallow water on vehicle maneuverability, such as changes in traction and tire slip due to hydrodynamic loads. Therefore, to enable quick and accurate prediction of vehicle mobility in shallow water, this study proposes (1) a new data-driven hydrodynamics model that can be integrated into off-road mobility solvers and (2) experiments in a model-scale environment for validating the proposed model. The predictive ability and computational time reduction achieved by the proposed vehicle-water interaction model are examined using several numerical examples, and the experiments and validation against the collected test data are presented.
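Conceptually, the data-driven model replaces an online CFD call with a fast learned map from maneuver state to hydrodynamic load, as in the toy sketch below; the inputs, the synthetic "drag law," and the regressor are illustrative, not the project's model.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 2000
depth = rng.uniform(0.1, 1.0, n)                  # water depth (m)
speed = rng.uniform(0.5, 8.0, n)                  # vehicle speed (m/s)
X = np.column_stack([depth, speed])
# Stand-in for offline CFD or tank-test data: drag force on the hull (N).
F = 500 * depth * speed**2 + rng.normal(0, 50, n)

surrogate = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, F)

# Inside the MBD time step, the surrogate replaces the expensive CFD call:
print(surrogate.predict([[0.6, 3.0]]).round(0))   # roughly 2700 N under this toy law
```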
Project 3.22
Terramechanics of Saturated Clays: Assessing Tire Performance Through Experimental and Numerical Approaches
PIs: Dr. Corina Sandu, Dr. Alba Yerro; Students: Varsha S Swamy, Chaitanya Sonalkar, Destiny Mason, Jasleen Bheora (Virginia Tech); Quad members: Dr. Katie Sebeck, Dr. David Gorsich (GVSC); Vinita Kumari (John Deere)
As autonomy becomes central to future combat vehicle design, high-fidelity virtual proving grounds are essential for accurate mobility prediction in complex terrains. This research focuses on saturated clay—an extreme off-road environment with low shear strength, high deformability, and complex pore pressure behavior. Complementary experimental and computational approaches are conducted to study trafficability on high water content soils. Experimentally, full-scale drawbar pull tests were performed on a saturated clay bed in a controlled indoor soil bin. A ten-pass sequence was conducted over 2–3 weeks to assess short- and long-term rutting and strength recovery, capturing time-scale effects on drawbar performance. Computationally, a finite element pneumatic tire model was developed and coupled with an effective stress-based clay model. Simulations across slip ratios were validated with experimental results. A sensitivity analysis identified key soil parameters influencing performance. A new mud model is proposed to overcome current limitations, incorporating critical state behavior, strain-rate dependence, and softening to residual strength. Together, these efforts enhance predictive modeling for autonomous mobility in saturated terrains.
Project 3.25
Machine Learning-Augmented Multi-Fidelity Tire-Soil Interaction Model for Autonomous Off-Road Mobility Prediction
Hiroyuki Sugiyama (PI), Takahiro Homma, Du-Chin Liu (U. of Iowa); Paramsothy Jayakumar (GVSC); Xiaobo Yang (Oshkosh Corp.)
A reliable simulation tool capable of predicting off-road mobility on complex granular deformable terrain is essential for vehicle design and performance evaluation. Although a computationally cheaper simple terramechanics (ST) model has been widely utilized for developing and evaluating autonomous navigation algorithms under stochastic terrain conditions, the semi-empirical and quasi-static approximations of the tire-soil contact modeling prevent reliable simulation-based assessments of off-road mobility systems, particularly for evaluating mobility limits. Furthermore, the use of physics-based high-fidelity complex terramechanics (CT) models requires substantial computational resources for applications that necessitate many simulation runs. To tackle these modeling and computational challenges in simulation-based assessment of autonomous mobility systems, this study proposes a new grid-based transient tire-soil contact model by bridging the CT and ST models and leveraging their strengths through a machine learning technique. It is demonstrated using several numerical examples that the transient tire-soil interaction behavior on large deformable granular terrain can be predicted accurately in scenarios not considered in the training data while achieving a substantial computational speedup for use in autonomous mobility studies.
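In its simplest form, the bridging idea can be sketched as learning the discrepancy between cheap ST predictions and high-fidelity CT results, then correcting the ST model online; everything in the sketch below (features, synthetic force laws, learner) is an illustrative stand-in for the project's grid-based transient contact model.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 1000
slip = rng.uniform(0.0, 0.8, n)                  # tire slip ratio
load = rng.uniform(2.0, 10.0, n)                 # normal load (kN)
X = np.column_stack([slip, load])

st_force = 0.7 * load * slip                     # cheap quasi-static estimate
ct_force = (0.7 * load * slip * (1 - 0.4 * slip)
            + rng.normal(0, 0.05, n))            # high-fidelity "truth"

# Learn only the ST-to-CT discrepancy, which is cheap to evaluate online.
corrector = GradientBoostingRegressor(random_state=0).fit(X, ct_force - st_force)

def fast_traction(slip, load):
    st = 0.7 * load * slip
    return st + corrector.predict([[slip, load]])[0]   # ST speed, near-CT accuracy

print(round(fast_traction(0.5, 6.0), 2))         # close to 0.7*6*0.5*0.8 = 1.68
```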