Technical Talks Abstracts
Jump to Session 1A | Session 1B | Session 1C | Session 2A | Session 2B | Session 2C | Session 3A | Session 3B | Session 3C
Day 1: Wednesday June 10
Session 1A: Perception & Planning
Project 1.39
Adaptive and Efficient Perception for Autonomous Ground Vehicles Operating in Highly Stochastic Environments under Sensing Uncertainties
Luke Chen, Junyao Wang, Praneetsai Vasu Iddamsetty, David Martin, Jonathon Smereka, Pramod Khargonekar, Mohammad Al Faruque
Autonomous vehicles rely on robust and efficient perception systems to operate safely under diverse and challenging conditions. This talk summarizes our recent efforts in adaptive multimodal sensor fusion for AV perception, with a focus on how, when, and why to fuse heterogeneous sensor data such as camera, LiDAR, and radar. We present a series of methodologies and frameworks developed to address these questions: (1) HydraFusion, which dynamically adapts the fusion strategy based on sensing context for optimized energy consumption and lower latency; (2) DivFuse, which explores the role of diversity-aware ensemble fusion in improving robustness; (3) HyperDUM, a prototype-based deterministic uncertainty quantification method leveraging hyperdimensional computing for real-time performance; and (4) CRUISE, which incorporates vision-language models (VLMs) to enhance contextual reasoning and feature reliability under uncertainty. Together, these contributions offer a principled and efficient pathway toward resilient perception in autonomous systems.
Project 1.41
Resilient Trajectory Planning for Extreme Mobility on Challenging Slopes
Tulga Ersal (PI), Bogdan Epureanu (Co-PI), James Baxter (GSRA) (UM); Paramsothy Jayakumar (GVSC); Chenyu Yi (Mercedes-Benz); Andrew Kwas, Timothy Morris (Northrop Grumman)
A novel local trajectory planner, capable of controlling autonomous off-road vehicles on steep, rugged terrain at high speeds, is presented. Autonomous vehicles cannot currently operate in this domain, as existing approaches either cannot produce dynamically feasible plans that satisfy long-term objectives, do not protect against rollovers induced by rough terrain and suspension dynamics, or are not real-time feasible. We address this challenge by developing a novel approximate-infinite-horizon model predictive control formulation for trajectory planning. Optimal cost-to-go estimates are used to combine multiple dynamics-aware planning stages without reductions to a single global plan. Extreme mobility, including tire liftoff without rollover, is enabled by a suspension-aware dynamics model and an energy-based safety constraint. The formulation is analytically shown to approximate an infinite-horizon planner and to predict rollover types ignored by many state-of-the-art methods. The planner’s ability to provide safe, extreme trajectories that satisfy long-term goals is studied through both simulated trials and full-scale physical experiments. The results show that the new planner achieves higher success rates and fewer rollovers than a state-of-the-art baseline across several vertically challenging scenarios that push the vehicle to its mobility limits.
Project 1.42
Hybrid Learning-Based Policy for High-Speed Autonomous Off-Road Navigation
Ram Vasudevan (PI), Elena Shrestha, Austin Buchan, Lucas Lymburner (UM); Paramsothy Jayakumar, Calvin Cheung (GVSC); Ahmed Mekky (MathWorks)
Autonomous off-road navigation is challenging due to limited prior knowledge of the environment, complex vehicle-terrain interactions, and the stochastic nature of real-world conditions. Model-based approaches such as model predictive control (MPC) can provide safety guarantees, but they rely on structured models that cannot readily exploit high-dimensional sensors such as cameras and lidar, which are critical for perceiving terrain, surface properties, and obstacles in novel environments. Moreover, accounting for uncertainty and modeling error often makes MPC conservative, reducing performance in challenging scenarios. In contrast, reinforcement learning (RL) can naturally integrate rich sensor inputs and learn scenario-dependent vehicle dynamics and uncertainty directly from data, enabling adaptation to local terrain and operating conditions without uniformly planning for worst-case outcomes. We develop a hybrid policy that combines MPC and RL through a gating network, which learns when to favor MPC for safety and when to trust RL for performance based on the current scenario and the learned policy’s confidence. We demonstrate this approach in simulation and real-world off-road experiments, showing improved performance and generalization across diverse unseen conditions.
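To make the gating idea concrete, the following is a minimal sketch of the general technique, not the project's implementation; the logistic form, the feature choices (policy confidence and terrain roughness), and the fixed weights are hypothetical stand-ins for quantities that would be learned.

```python
import math

def gate_weight(confidence, roughness):
    # Logistic gate (illustrative form): trust the RL policy more when its
    # confidence is high and the terrain is smooth or familiar. The weights
    # here are hypothetical; in practice they would be learned from data.
    z = 3.0 * confidence - 2.0 * roughness
    return 1.0 / (1.0 + math.exp(-z))

def hybrid_steering(mpc_cmd, rl_cmd, confidence, roughness):
    # Convex blend of the two controllers' proposals; as the gate tends to
    # zero the output falls back to the MPC command and its safety margin.
    g = gate_weight(confidence, roughness)
    return g * rl_cmd + (1.0 - g) * mpc_cmd
```

With low policy confidence on rough terrain the blended command stays close to the MPC proposal, which is the fallback behavior the abstract describes.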
Project 1.45 (3-min rapid fire, new project)
Physics Forward Co-Design of Small, Enduring, Multimodal Ground Robots for Reconnaissance
PI: Cameron Aubin (UM)
Abstract TBA
Project 1.46 (3-min rapid fire, new project)
Graph-Enhanced Vision-Language Sensor Fusion for Robust Perception in Data-Scarce and Ambiguous Driving Scenarios
Praneetsai Vasu Iddamsetty, Junyao Wang, Luke Chen, Jonathon Smereka, Pramod Khargonekar, Mohammad Al Faruque
Reliable perception in autonomous driving and robotics requires fusing heterogeneous sensors—cameras, LiDAR, and radar. Yet current fusion methods break down under the edge conditions that define real-world deployment: (1) data scarcity and long-tail events hinder generalization, (2) degraded or incomplete inputs from occlusions, failures, or adverse weather reduce reliability, (3) semantic ambiguity and modality conflicts (e.g., radar noise vs. visual occlusion) challenge interpretation, and (4) domain shifts and adversarial environments undermine transferability. Existing approaches, relying on feature concatenation or attention, rarely incorporate structured spatial reasoning or high-level semantic priors. Consequently, they yield brittle, non-interpretable decisions and lack resilience—failing to deliver utility where it matters most. This project develops graph-enhanced, vision-language guided sensor fusion to address these limitations. By modeling spatial dependencies, introducing adaptive semantic modulation, and enabling graceful degradation, we aim to achieve resilient, interpretable perception in data-scarce, ambiguous, and adversarial environments.
Session 1B: Human-Autonomy Interactions
Project 2.19
From Takeover to Teamwork: Adaptive Human-Autonomy Teaming Within and Between Vehicles
PIs: Kira Barton, Chris Vermillion; Quad Members: Scott James, Matthew Castanier; Student Researchers: Aleksandra Dudek, Patrick Linford
A critical human-autonomy interaction exists at both an intra-vehicle level (i.e., between a human operator and a (semi-)autonomous vehicle within the vehicle itself) and an inter-vehicle/multi-agent network level (i.e., between multiple autonomous vehicles and human soldiers – including collaborators and adversaries). We first investigate how adapting the training protocol for a takeover task online to account for human behavioral and learning variations impacts training outcomes. A case study was conducted to compare a baseline static training protocol with the adaptive protocol with 30 participants, demonstrating the value of the personalized protocol.
We further investigate the ability of a learning framework to support the recovery of victims in an unstructured search and rescue (SAR) task. To address partial observability, inter-team information asymmetry, and intermittent human intervention, we propose a human-on-the-loop POMCP-based framework that jointly reasons over environmental uncertainty, team coordination, and human behavior. The approach models latent human preferences and team strategies while leveraging a dynamic bias map to guide complementary search. Lab experiment results in multi-group SAR scenarios demonstrate improved efficiency and robustness over baseline methods.
Project 2.20
Hierarchical Task Management to Leverage Appropriate Trust and Reliance in Agents with Heterogeneous Behaviors
Kenna Henkel (presenting), Brad Killen, Daniel Carruth (PI), Cindy Bethel (co-PI) (MSU); Victor Paul (GVSC); Mary Quinn (Leidos)
Ensuring effective task performance in teams of heterogeneous autonomous agents requires mechanisms to manage trust and reliance as behaviors vary, degrade, or become anomalous during execution. Without such mechanisms, teams may over-rely on unreliable agents or underutilize capable ones, reducing overall effectiveness in dynamic environments.
This work examines approaches for monitoring trust and influencing reliance to maintain task performance in multi-agent teams. Building on a shared meta-model framework and virtual testbed for collaborative search tasks, the project focuses on simulation-based studies of fully autonomous teams exhibiting diverse and anomalous behaviors.
We implement and compare two strategies. The first is a decentralized, agent-based approach in which individual agents assess teammate reliability based on observed behavior and interaction history. The second introduces a higher-level arbitration agent that actively monitors team performance in real time, identifies anomalous task execution, and participates directly in task delegation to mitigate degraded performance.
Experiments evaluate these approaches across varying team sizes and heterogeneous behaviors. Future work will re-incorporate humans in the loop and expand the range of anomalous behaviors.
Project 2.21
Enhancing UGV Navigation with Adaptive Human Intervention
PIs: Dawn Tilbury, Lionel P. Robert Jr. (UM)
Abstract TBA
Project 2.26
Shared Perspectives for Unforeseen Response in Human-Robot Teams (SPUR)
PIs: Lionel Robert, Dawn Tilbury; Students: Zariq George, Myles Mackie (UM); GVSC: Mark Brudnak, Rachel Anderson, Andrew Hoelscher; Industry: Lilia Moshkina (May Mobility), Samantha Dubrow (MITRE)
Human-robot teams (HRTs) are increasingly deployed in complex, high-stakes environments where unforeseen events can disrupt coordination and degrade performance. Existing approaches, such as teleoperation and contingency planning, often rely on continuous human control or on prior knowledge of potential failures, which limits scalability and adaptability. Shared mental models (SMMs) offer a promising framework for enabling coordinated behavior by aligning understanding of team roles, tasks, and capabilities. However, little is known about how SMMs are formed in HRTs, particularly under dynamic and uncertain conditions.
This work investigates capability representation as a mechanism for supporting the development of shared mental models in HRTs. Capability representations encode what agents can and cannot do, shaping how human teammates interpret and coordinate with robotic partners. We examine how different representation modalities affect SMM formation, team adaptability, and task performance in a simulated search-and-reconnaissance mission involving one human and two unmanned ground vehicles.
We propose a causal framework linking capability representation to performance through shared mental models and team adaptability. Data collection has been completed (N = 76) following an a priori power analysis, and analyses are ongoing to evaluate how representation modalities influence adaptive coordination under unforeseen events.
Session 1C: Materials, Structures and Terramechanics
Project 3.23
Adaptive Structures with Embedded Autonomy for Advancing Ground Vehicles
Dr. Kon-Well Wang (PI, UM), Dr. Matt Castanier (U.S. Army GVSC Member), Dr. Jayanth Kudva (NextGen Aeronautics, Inc.), Dr. Ellen C. Lee (Ford Motor Company), Minh Nguyễn (PhD Student), Dr. Patrick Dorin (Postdoc)
Rapid advances in autonomous systems, such as automated vehicles, have created demand for adaptive structural and material systems that are even more intelligent. This need inspired us to advance from the conventional platform, which relies mainly on add-on digital computers to achieve intelligence, to mechano-intelligence, which embodies intelligence in the mechanical domain. Although interesting studies have been attempted, there is no systematic foundation for constructing and integrating the various elements of mechano-intelligence, namely perception, learning, and decision-making, with sensory input and execution output for engineering functions. In this study, we lay down this foundation by harnessing the physical computation concept, advancing from mere computing to multifunctional mechano-intelligence. As exemplar testbeds, we constructed mechanically intelligent metastructures that achieve adaptation via physical computing and deliver engineering functions such as adaptive noise and vibration control. This research will pave the path to autonomous structures that surpass the state of the art, with lower power consumption, more direct interactions, and much better survivability in harsh environments and under cyberattack.
Project 3.25
Machine Learning-Augmented Multi-Fidelity Tire-Soil Interaction Model for Autonomous Off-Road Mobility Prediction
Hiroyuki Sugiyama (PI), Takahiro Homma, Du-Chin Liu (UI); Paramsothy Jayakumar (GVSC); Xiaobo Yang (Oshkosh Corp.)
A reliable, fast simulation tool for predicting off-road mobility on complex granular deformable terrain is essential for simulation-based assessment of autonomous mobility systems. Although a computationally cheaper simple terramechanics (ST) model has been widely utilized for developing and evaluating autonomous navigation algorithms under stochastic terrain conditions, the semi-empirical and quasi-static approximations in tire-soil contact modeling prevent reliable simulation-based assessments of off-road mobility systems, particularly for evaluating mobility limits. Furthermore, physics-based high-fidelity complex terramechanics (CT) models require substantial computational resources for applications that necessitate many simulation runs. To address these modeling and computational challenges in virtual testing of autonomous mobility systems, this study proposes a new grid-based transient tire-soil contact model that bridges the CT and ST models, leveraging their strengths through a machine learning technique. In particular, the pointwise grid prediction model proposed in Y1 is enhanced with adaptive functions that approximate the time-varying distributions of normal contact stress and soil surface velocity along each circular contact line within a grid contact patch, enabling further computational speedup. The predictive ability and computational benefits of the proposed approach are demonstrated with several numerical examples, along with experimental validation using an MRZR test vehicle.
Project 3.27
Quantum Computing Innovation for Off-Road Mobility
Shravan Veerapaneni, James Stokes, Sam Cochran (UM); Jeremy Mange, Paramsothy Jayakumar, David Gorsich (GVSC); Sarah Mostame (IBM TJ Watson Center)
Despite advances in high-performance computing and algorithms, high-fidelity, physics-based simulation of ground vehicles on off-road terrains remains computationally intensive and costly. With classical transistor scaling nearing its physical limits, sustaining Moore’s Law has become increasingly difficult. Quantum computing offers a promising alternative, with the potential to accelerate hard computational tasks. In this talk, we introduce an algorithm based on Hamiltonian Gradient Descent (HGD) to accelerate continuous optimization problems using continuous-variable quantum computing (CVQC)—a framework that encodes information into quantum modes, or qumodes. CVQC operations must be decomposed into native elementary gates compatible with quantum hardware. We present gate synthesis techniques for implementing HGD objective functions within the CVQC framework. Specifically, we propose an approach that offloads the computational bottleneck in discrete element calculations onto a quantum device, enabling it to be addressed with HGD. We also discuss potential speedups from this method and highlight the challenges posed by current quantum hardware limitations. Finally, we explore possible solutions to overcome these obstacles and make practical quantum acceleration feasible.
Day 1: Wednesday June 10 - Afternoon
Session 2A: Perception & Planning, and Multi-Agent Systems
Project 1.44
NEMOSYS: Neural Memory Organization System - Experience-based Neuromorphic Learning for Decision-making and Autonomous Maneuvering at the Edge
Maryam Parsa (PI), Derek Gobin, Ali Albayati, Shay Snyder (GMU); Jon Smereka (GVSC); Andrew Capodieci (Neya)
Autonomous ground vehicles (AGVs) operating in off-road and contested environments must adapt to dynamic conditions while reasoning over previously encountered situations. However, current learning frameworks rely primarily on reactive policies or unstructured experience replay, limiting their ability to identify and leverage recurring scenarios in stochastic environments. This work presents the Neural Memory Organization System (NEMOSYS), a brain-inspired framework for integrating semantic and episodic memory into autonomous decision-making. NEMOSYS enables agents to learn structured representations of scenarios and their temporal evolution, allowing both individual vehicles and teams to recognize previously encountered situations and adapt their behavior accordingly. The framework leverages vector symbolic architectures (VSAs) to encode multi-modal observations and temporal relationships into compact, distributed representations that support efficient storage, retrieval, and generalization. A key result of this work is demonstrating that VSA-based representations provide a plausible and scalable mechanism for identifying previously seen scenarios through similarity-based retrieval, without requiring explicit replay or centralized memory. This capability supports improved adaptability, reduced training time, and more efficient operation under size, weight, and power constraints. Ongoing work focuses on extending the framework with episodic memory mechanisms to capture temporal dependencies between events, and on deployment and evaluation in real-world AGV platforms.
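For readers unfamiliar with vector symbolic architectures, the core operations the abstract relies on (binding, bundling, and similarity-based retrieval) can be sketched in a few lines. This is a generic bipolar-hypervector illustration of the technique, not the NEMOSYS implementation, and the function names are our own.

```python
import random

def rand_hv(dim, rng):
    # Random bipolar hypervector; any two such vectors are nearly orthogonal.
    return [rng.choice((-1, 1)) for _ in range(dim)]

def bind(a, b):
    # Bind two hypervectors (elementwise multiply); the result is
    # dissimilar to both inputs, encoding an association.
    return [x * y for x, y in zip(a, b)]

def bundle(vectors):
    # Bundle by majority vote; the result stays similar to every member,
    # which is what makes similarity-based retrieval work.
    return [1 if sum(col) >= 0 else -1 for col in zip(*vectors)]

def similarity(a, b):
    # Normalized dot product in [-1, 1]; near 0 for unrelated vectors.
    return sum(x * y for x, y in zip(a, b)) / len(a)
```

Retrieval then amounts to comparing a query hypervector against stored scenario hypervectors and returning the most similar one, with no explicit replay buffer.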
Project 5.21
Multi-Phase Vector Symbolic Architectures for Distributed and Collective Intelligence in Multi-Agent Autonomous Systems
Maryam Parsa (PI), Shay Snyder (GMU); David Gorsich (GVSC); Andrew Capodieci (Neya)
Autonomous multi-agent systems operating in contested and stochastic environments require robust mechanisms for sharing information, adapting to uncertainty, and making real-time decisions under limited communication and computational resources. However, existing approaches often rely on centralized processing or unstructured high-dimensional data representations, limiting scalability and responsiveness in heterogeneous fleets. This work presents a distributed collective intelligence (DCI) framework that enables shared situational awareness and adaptive decision-making across agents through structured hyperdimensional representations. The approach encodes multi-modal sensing data into compact vector symbolic representations that support distributed memory and computation. We integrate three key components: VSA-OGM for spatially grounded mapping, HyperSpace for compositional representation and manipulation, and SRMU for relevance-aware streaming memory updates under non-stationary conditions. Together, these components enable agents to maintain and update shared representations without centralized coordination, supporting scalable learning and decision-making in dynamic environments. Our results demonstrate improved computational efficiency, robustness to noise and uncertainty, and adaptability to evolving conditions, providing a pathway toward real-time, distributed intelligence in heterogeneous autonomous systems.
Project 1.43
Exploring Detecting and Classifying Effects of Physical and Cyber Attacks on Perception Pipeline for Uncrewed Ground Systems
Daniel Carruth (PI, presenting), Nicholas Harvel, Brad Killen, Cooper Black, Devin Chen, Kyla Mangum (MSU); GVSC quad member: Jon Smereka; Industry quad members: Dave Martin (Neya), Jeremy Falls (MartinFed)
Autonomous ground vehicles increasingly operate in contested and adversarial environments, where deliberate physical and cyber attacks can degrade perception systems and undermine safe operation. These attacks range from direct physical interference with sensors (e.g., occlusion, contamination, abrasion, or directed lighting) to cyber actions that manipulate data as it moves through the perception pipeline. Because perception outputs directly inform downstream decision‑making, attacks that are not detected and addressed in real time can quickly lead to unsafe behaviors or mission failure.
This project investigates methods for detecting, diagnosing, and responding to attacks on autonomous vehicle perception systems. Leveraging laboratory, field, and simulation-based threat data, we examine how attacks manifest as observable effects on sensor data and outputs of the pipeline. A key focus of Year 2 is the integration of a visual language model as part of the perception and diagnostic system. We will discuss its emerging role in near‑real‑time detection, classification of potential threats, identification of active attacks, and recognition of attack effects. Ongoing work targets integration with decision‑support interfaces to enable timely, informed responses by automated systems and human operators.
Project 5.23
A Hierarchical Transformer Approach to Automate Co-Design of Vehicles and their Team Operations
Bogdan Epureanu (PI), Anirudh Kanchi, Soham Purohit (UM); Oleg Sapunkov, Anthony Dolan (GVSC); Arnold Martinez, Weston Murphy (Aberdeen Test Center); Matthew Foglesong (NAMC)
Creating AI-powered teams of autonomous vehicles involves conceptualization, designer input, intelligence empowerment, and performance experimentation. This process becomes especially challenging when developing multi-agent systems that require human-autonomy teaming due to complex interactions between heterogeneous agents and operation environments. Existing approaches either solve teaming strategies given pre-defined physical attributes of vehicles or find attributes that satisfy pre-defined strategies/behaviors. In this research, we devise a more general approach to iteratively and simultaneously co-design the physical attributes of the agents in the team and their autonomous behaviors and teaming strategies. We present a hierarchical transformer approach that optimizes the team design at the higher level and the teaming behaviors at the lower level. Furthermore, we highlight the effectiveness of the transformer framework independently on both the co-design problem and the teaming strategy optimization. We demonstrate the co-design method in a multi-agent disaster-relief operation considering vehicle capabilities, stochastic task dynamics, and constraints on physical attributes. This research aims to automate a co-design process that cost-effectively leverages available resources and assets for maximum teaming effectiveness and adapts quickly to mission demands and to changes in the operating environment.
Project 5.26 (3-min rapid fire, new project)
Trajectory Planning with Omni-Experiential Learning for Robust Fleet Mobility on Extreme Off-Road Terrain
Tulga Ersal (PI), Bogdan Epureanu (Co-PI), James Baxter (GSRA), Haoran Ma (GSRA) (UM); Paramsothy Jayakumar (GVSC); Chenyu Yi (Mercedes-Benz); Andrew Kwas, Timothy Morris (Northrop Grumman)
This project focuses on autonomous vehicle fleets on challenging off-road terrain. It aims to enable robust and adaptive off-road mobility through a distributed, peer-based learning framework. The learning framework leverages experience, either of the ego vehicle or of peers, to improve mobility. It acts through a channel separate from existing learning methods and is expected to have a distinctive capability to compensate for a multitude of error-inducing physical phenomena (i.e., enabling adaptation to "unknown unknowns"). Uniquely, it is also intended to consume experience from a variety of sources, with automatic adaptation to the information quality of the source. Sources may include both physically similar and dissimilar vehicles in a heterogeneous fleet, and both high and low fidelity digital twins. Preliminary results demonstrate improved mobility performance via fewer rollovers and decreased collision extents compared to a non-adaptive baseline for a homogeneous vehicle fleet traversing a challenging off-road route.
Project 5.28 (3-min rapid fire, new project)
Provably Correct Fleet Operations and Vehicle SoS Integration with Assume-Guarantee Contracts
Inigo Incer (PI), Yigit Narter (UM); Matt Castanier, Steve Rapp (GVSC); Alessandro Pinto (NASA JPL)
Abstract TBA
Project 5.29 (3-min rapid fire, new project)
AI-Enabled Command and Control via Multi-Agent Conformalized Risk Adaptation
PI: Dimitra Panagou; GSRA: Daniel Cherenson (UM)
AI/ML and other data-driven methods have significantly enhanced the capabilities of autonomous multi-agent teams. However, quantifying the prediction uncertainty of such models remains a critical challenge for safeguarding these systems, particularly under online model updates and distributional shifts. While Conformal Prediction (CP) offers a distribution-free framework for uncertainty quantification, traditional methods often rely on restrictive data assumptions. Recent advances in Distributed Conformal Prediction (DCP) allow agents to quantify uncertainty collectively without sharing raw datasets, yet their application within multi-agent frameworks remains unexplored. Furthermore, DCP currently relies on restrictive assumptions about the homogeneity of prediction models across agents. In this project, we bridge this gap by introducing the first application of DCP to heterogeneous multi-robot teams. We propose novel methods for multi-agent data collection and integrate them into distributed optimization tasks, specifically target tracking and cooperative localization. Our approach enables robust, real-time uncertainty quantification, ensuring safer and more reliable coordination in dynamic environments.
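As background for the approach above, here is a minimal sketch of standard split conformal prediction, the distribution-free building block that DCP generalizes. This is a generic single-agent illustration, not the project's distributed method, and the function name is our own.

```python
import math

def conformal_interval(cal_residuals, y_pred, alpha=0.1):
    # Split conformal prediction: rank held-out absolute residuals and take
    # the finite-sample-corrected (1 - alpha) quantile as the interval radius.
    n = len(cal_residuals)
    scores = sorted(abs(r) for r in cal_residuals)
    k = min(n, math.ceil((n + 1) * (1 - alpha)))
    q = scores[k - 1]
    # Marginal coverage of at least 1 - alpha holds when calibration and
    # test data are exchangeable -- the assumption distributed variants
    # must handle across agents with heterogeneous models.
    return y_pred - q, y_pred + q
```

The distributed setting replaces the single residual list with scores pooled across agents without sharing raw data, which is where the homogeneity assumptions the project targets come in.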
Session 2B: Human-Autonomy Interactions
Project 2.22
Trust-Calibrated Meta-Learning for Adaptive Multi-Robot Motion Planning under Temporal Logic Specifications in Human-Robot Collaborative Bounding Overwatch
Shahil Shaik, Anshul Nayak, Yue Wang (PI) (CU); Jonathon Smereka (GVSC)
Bounding overwatch is a critical maneuvering tactic, where one unit provides cover while another advances under enemy engagement. Work on “wingmen” has incorporated environmental factors into trust models for human-robot collaboration but remains limited to forward-only perspectives. To address this, we propose a human-aware multi-robot framework for multi-viewpoint bounding overwatch that integrates human cognition with autonomous decision-making. Drawing on psychology, behavioral economics, and human trust modeling, we enhance situational awareness and interaction in dynamic environments using multi-agent reinforcement learning (MARL). Conventional MARL approaches rely on centralized training or global information, which is impractical in communication-constrained off-road scenarios. We introduce Distributed Graph-Attention MAPPO (DG-MAPPO), where agents share local observations over multi-hop graph structures. A Distributed Graph Attention Transformer (D-GAT) module aggregates this information to approximate the global state, mitigating non-stationarity and enabling scalable coordination. On the human side, trust is modeled as a dynamic quantity evolving under uncertainty. Human feedback trains a Neural Additive Model (NAM) that identifies key factors influencing trust. This model is integrated into MARL, enabling trust-aware policies that improve collaboration in complex multi-robot missions.
Project 2.24
Multimodal Real-Time Cognitive Load and Emotional State Detection for Human-Machine Integrated Formations
Alvaro Vega-Hidalgo, Jiayi Tang, Sean Rice, Ingrid Wu, Rada Mihalcea (PI), Bogdan Epureanu (co-PI), Mihai Burzo (co-PI) (UM)
As autonomous systems become more common, operators must process increasing amounts of information in real time, which can lead to higher cognitive load, stress, and reduced performance. We will present our work toward a multimodal system for real-time detection of cognitive load in human-machine integrated formations. The system combines physiological signals, facial expressions, linguistic cues, user information, and task context to build predictive models of operator state. We will describe the data collected to date and share initial analyses of the relationships between different modalities and cognitive load.
Project 2.25
Dynamic Allocation of Information through Auditory, Visual, and Haptic Interfaces to Minimize Cognitive Burden
Andrea Macklem-Zabel (OU), Absalat Getachew (OU), Jennifer Vonk (OU), Wing-Yue Geoffrey Louie (OU), Mark Brudnak (GVSC), Ryan Wood (GVSC), Chris Mikulski (GVSC), Joseph O’Bruba (GVSC), Gerald Jung (GVSC)
The US Army Next Generation Combat Vehicle (NGCV) program is expected to produce vehicles that are closed-hatch, with the crew situated inside to reduce risk and with multiple sensors combined with artificially intelligent agents all vying for soldiers’ attention. This brings new challenges in situational awareness, communication, workload, and task allocation for crews operating NGCVs. Extended reality (XR) visual, auditory, and haptic technologies have the potential to address these challenges by enabling the careful curation of information presented to soldiers according to their individual needs. However, there is a research gap in how best to present and allocate information across human sensory channels using XR interfaces to maximize task performance and, in the future, to support overall team performance via dynamic task allocation. In this technical talk, we present preliminary results from a study designed to investigate the effect of information-type-to-sensory-channel mappings on task accuracy and latency. These results will inform future design recommendations for how best to map information to sensory channels to maximize user performance.
Project 2.A127
A3GENT: Engineering Adaptive Adversary Digital Twins for Autonomous Vehicle Validation
Grace Bochenek (PI), Bulent Soykan (Co-PI), Ghaith Rabadi (Co-PI); Victor Paul (GVSC)
As autonomous systems like Virtual Crew Members (VCMs) and Digital Advisors (DAs) integrate into ground vehicles, their robustness relies on rigorous validation. Current simulations rely on predictable, scripted adversaries, risking the development of brittle AI that fails against dynamic, real-world threats. The A3GENT project addresses this gap by developing Adaptive Adversary Digital Twins (ADTs). These intelligent "Red Force" agents leverage Reinforcement Learning (RL), Deep Learning, and Generative AI to continuously adapt and challenge VCMs in real-time. Our research spans three core tasks: (1) developing adaptive AI algorithms for realistic adversarial maneuvering via multi-objective reward shaping; (2) creating a real-time data integration and procedural scenario generation pipeline within a decoupled Python-to-Unreal Engine architecture; and (3) scaling to multi-agent, game-theoretic strategic coordination. This presentation highlights our initial successes in emergent tactical behaviors (e.g., kiting, trigger discipline) and our interface bridging Python AI with ProjectGL. A3GENT ultimately provides the Army with an automated "Digital Crucible" to validate autonomous combat vehicles prior to physical fielding.
Project 2.27 (3-min rapid fire, new project)
Acoustic Hologram Integrated with Augmented and Virtual Realities for Communication
Chengzhi Shi (PI), Kon-Well Wang (co-PI), Devavrath Raghunat (UM); Matthew Castanier, Mark Brudnak, Christopher Mikulski (GVSC); Xing Xing (General Motors Company)
We aim to develop dynamic 3D acoustic haptic fields that provide continuous volumetric touch essential for advanced human-machine communication for Army applications. Current AR/VR technologies focus on visual and motion experiences, but lack immersive, wireless, real-time 3D haptics. Existing mid-air ultrasound haptic devices provide only localized vibrotactile feedback.
To address this gap, we will utilize a digitally controlled ultrasound array for acoustic hologram synthesis. Our technology will emit real-time 3D radiation force and modulated pressure fields to simulate realistic scenarios, supporting ARC’s mission for applications such as digital engineering design reviews, in-vehicle blast cueing, and soldier training. The proposed hologram will be contactless, highly reconfigurable, and real-time in 3D, achieving full-body haptics.
We will use continuum mechanics and computational sensory modeling to link acoustic fields to human perception. Furthermore, we will develop deep-learning-based algorithms to generate 3D haptic fields with high temporal and spatial resolution. Finally, we will build an experimental setup for subjective psychophysical validation. This technology will integrate realistic touch into AR/VR environments to maximize soldier readiness and driver’s environmental awareness, aligning with GVSC and automotive industry developments.
Project 5.27 (3-min rapid fire, new project)
Adaptive Vision-Language Model (VLM) and Vision-Language-Action Model (VLA) Enhanced Off-road Autonomy for Heterogeneous Multi-Agent Systems
Shahil Shaik (Clemson), Aditya Parameshwaran (Clemson), Anshul Nayak (Clemson), Yue Wang (Clemson), Jonathon Smereka (GVSC)
Multi-agent reinforcement learning (MARL) offers a principled approach for enabling coordination in heterogeneous multi-robot systems. However, its practical deployment in off-road environments is limited by poor sample efficiency, weak generalization, and reliance on centralized critics learned from sparse data. At the same time, vision-language models (VLMs) and vision-language-action models (VLAs) demonstrate strong multimodal reasoning and zero-shot capabilities, yet their application to multi-agent autonomy remains largely unexplored. This project proposes a unified VLM/VLA-enhanced MARL framework for adaptive off-road autonomy in heterogeneous multi-robot systems. The approach integrates three key components: (1) a large-scale multi-modal dataset capturing diverse collaborative off-road scenarios, (2) Multi-Agent Vision-Language-Critic Models (MA-VLCMs) that leverage pretrained foundation models to provide generalized, context-aware value estimation for improved sample efficiency, and (3) distributed VLA policies that enable scalable, adaptive coordination under communication and computational constraints. By jointly reasoning over visual observations, structured inter-agent representations, and natural language task specifications, the framework bridges perception, reasoning, and control. Experimental validation in simulation and real-world settings will be conducted to demonstrate the proposed framework. This work establishes a scalable pathway toward perception-driven, language-informed, and value-grounded autonomy for real-world multi-robot systems.
Session 2C: Materials and Structures
Project 3.A116
Improving the Cyclability of Li-S Batteries Through Fe-Catalyzed Polysulfide Conversion
Jyoti Pandey, Aliakbar Yazdani, Mukesh Singh, Isaac N. Boakye, Kyle L. Kilbarger, Carlos Chavez, Zachary Jasper, Benjamin R. Seltin, Adeel Zafar, Valeri Petkov, Veronica Barone, Mark Wolfman, Bradley D. Fahlman (PI); Chi-Hao Chang (The Dow Chemical Company); Yi Ding (GVSC)
Although much work has been expended on the development of metal-sulfur batteries, especially Li-S, the insulating behavior of sulfur and polysulfides, as well as poor reversibility and slow kinetics of the metal-S conversion reactions, have limited their commercial applications. In this presentation, we will describe the synthesis, characterization and electrochemical testing of Fe-Nx single-atom catalysts on urea-derived graphitic carbon nitride (g-C3N4), demonstrating that thermal reduction of the support prior to Fe incorporation effectively tunes the Fe-Nx coordination environment to enhance polysulfide shuttle suppression and sulfur redox kinetics. Thermally reduced Fe-loaded cathodes show markedly greater enrichment of reduced sulfur species and stronger suppression of oxidized intermediates than pristine-support Fe catalysts and bare supports, demonstrating superior polysulfide conversion.
These catalysts deliver the highest electrochemical performance reported to date: an initial capacity of 693.5 mAh g-1 at C/10, retaining ~490 mAh g-1 after 200 cycles (~71% retention) and 448.7 mAh g-1 at 1 C, outperforming Fe catalysts on the pristine support (~66% retention) and the bare thermally reduced support (~48% retention). This work establishes thermal reduction of g-C3N4 as an effective support-engineering strategy for single-atom catalyst design in Li-S batteries.
Project 3.A117
High Temperature PEM Fuel Cells
Anja Mueller (PI), Axel Mellinger, Leela Rakesh, Milah Curry, Gavin Mehl, Twaha Alim, Phoenix Knipe, Sujith Ganta, Sithira Samaranayake (CMU); Kevin Centeck, Ted Burye, Talia Marie Sebastian (GVSC); Gary K. Ong (Celadyne Technologies)
High-temperature, low-humidity operation remains a key limitation for proton exchange membranes in hydrogen fuel cells, where conventional materials exhibit reduced conductivity and stability. This work addresses that constraint through the development of a fluorinated, branched polymer system incorporating imidazole functional groups to enable proton transport under dry conditions. Molecular dynamics simulations using the ReaxFF force field in LAMMPS are used to predict optimal polymer architectures, including branching characteristics and functional group distribution. These predictions inform the synthesis of high–molecular weight polymers with controlled structure, which are subsequently processed into dense membranes designed to promote continuous proton transport pathways. The presentation will report on the current state of the modeling framework alongside optimized synthesis and membrane casting conditions. Structural, thermal, mechanical and electrical properties of linear, branched, unsubstituted, and substituted systems will be evaluated. Proton conductivity measurements will be correlated with polymer architecture, and initial evidence of channel formation will be discussed.
Project 3.A118
Structural Integrity Assessment of Army Ground Vehicle Structures for Predictive Maintenance
Boyoung Kim and Chanseok Jeong (PI) (CMU)
A robust nondestructive testing (NDT) tool is needed to inspect ground vehicles (GVs), ensuring structural integrity and combat readiness. Such a tool must reliably detect embedded damage—such as delamination in composites or cracks in critical components—that may compromise functionality. Ultrasonic guided wave NDT shows strong potential for imaging structures, enabling faster failure analysis and decision-making while reducing downtime and maintenance costs.
We present a computational approach for identifying void defects within structures by interpreting guided wave measurement data. Training datasets are generated from wave propagation simulations with randomly generated defects. Each sample consists of wave responses recorded by sensors and damage maps on a fixed background grid, which is unaltered during the data generation process. An Artificial Neural Network (ANN) is then trained to classify the state of each element in the grid based on wave signals. The ANN reconstructs defects without prior knowledge of their location, shape, or quantity. The effects of experimental factors—sensor placement, number of measurement locations, noise, measurement degrees of freedom, frequency, and material uncertainties—are evaluated for their impact on detection accuracy.
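As a minimal, self-contained illustration of the element-wise classification idea (not the project's simulator, dataset, or network), the sketch below trains a one-sigmoid-layer classifier to recover defect maps on a fixed grid from synthetic linear "wave" signals; the grid size, signal model, and noise level are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the pipeline above: defect maps live on a fixed 4x4
# background grid, and "sensor signals" are a random linear mix of the map
# plus noise, substituting for simulated guided-wave responses.
n_elems, n_sensors, n_samples = 16, 24, 400
mix = rng.normal(size=(n_elems, n_sensors))                  # toy wave physics
maps = (rng.random((n_samples, n_elems)) < 0.2).astype(float)
signals = maps @ mix + 0.05 * rng.normal(size=(n_samples, n_sensors))

# Minimal one-layer network: classify the state (intact/damaged) of every
# grid element from the recorded signals, one sigmoid output per element.
W = np.zeros((n_sensors, n_elems))
b = np.zeros(n_elems)
for _ in range(3000):
    p = 1.0 / (1.0 + np.exp(-(signals @ W + b)))             # per-element prob.
    W -= 0.1 * signals.T @ (p - maps) / n_samples            # cross-entropy grad.
    b -= 0.1 * (p - maps).mean(axis=0)

pred = (signals @ W + b) > 0.0                               # threshold at 0.5
accuracy = (pred == maps.astype(bool)).mean()
```

The real method replaces the toy linear mix with finite-element wave simulations and the single layer with a deeper network, but the element-wise labeling of a fixed background grid is the same.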
Project 3.A119
Development and Application of Friction-Free Bending for Assessment of Material Properties under Complex Loading Conditions
Alexandra Glover (PI), Shane Anderson, John Rosenberger, Jacob Longstreth (MTU); Jake Hawkins, Allen Shirley (Corvid Technologies); Katherine Sebeck (GVSC)
This work addresses the need for improved materials and manufacturing technologies to enable next-generation autonomous ground vehicle hull structures. Conventional fabrication relies on multi-pass manual welding of thick armor plate, resulting in high cost, long production times, and increased defect risk, characteristics incompatible with future attritable, autonomous systems. Deformation-based manufacturing methods offer a promising alternative; however, their implementation requires a quantitative understanding of material workability under complex loading conditions. To enable the adoption of this technology, this research program aims to develop a methodology to assess the ductility of candidate materials using a custom-designed three-point-bend test, based upon the VDA 238-100 standard, with integrated digital image correlation (DIC). This approach enables full-field, in situ measurement of strain, displacement, and strain rate during bending. By correlating bend-derived metrics with conventional tensile properties, this work aims to establish a robust framework for evaluating processing–structure–property–performance relationships. The resulting methodology is expected to support rapid material screening and inform the design of manufacturable, high-performance vehicle structures.
Project 3.28 (3-min rapid fire, new project)
Advancing Perception and Threat Identification of Autonomous Vehicles via Reconfigurable Phononic Structures
Kon-Well Wang (PI), Bogdan-Ioan Popa (Co-PI), Hao-Yun (Dennis) Hung (Ph.D. Student) (UM); Matthew Castanier (GVSC); Taehwa Lee, Jayanth Kudva (Industry Quad Members)
The growing use of drones in modern conflicts and the staggering losses they have caused have driven an urgent need for autonomous vehicles to detect and locate aerial threats early. Conventional sensing approaches such as radar are often ineffective against small, low-flying targets. While acoustic sensing offers a promising alternative, existing systems rely on bulky components that are difficult to integrate into mobile platforms. To address these limitations, we propose a new approach to acoustic sensing using reconfigurable phononic metastructures.
The central idea is to engineer conformal, lightweight lattices whose geometry can be dynamically reconfigured to manipulate Dirac cones in their acoustic band structure, enabling tunable and directional beamforming in air. This can be realized by combining origami-inspired structures with metamaterial inclusions, allowing broad-range, real-time scanning and sensing. Physics-based modeling, digital twins, and machine learning techniques will be integrated to systematically design and optimize these structures. In parallel, an automated neural-network-based classifier will be developed to identify drones from signals captured by the metastructure. Together, these advances will establish a new class of acoustic sensing system for next-generation autonomous platforms.
Project 4.38 (3-min rapid fire, new project)
Suppression of junction temperature fluctuation in power semiconductors using phase change material
PI: Solomon Adera; Student: Fabrizzio Vega (UM)
Gallium nitride (GaN) power semiconductors are becoming increasingly widespread, replacing traditional silicon semiconductors for their faster switching speeds and higher power densities. However, as power densities increase, thermal limitations become a critical bottleneck to their performance. This is especially true during burst-power events, such as rapid acceleration in electric vehicles, where the heat generation exhibits large intermittencies that cause sharp fluctuations in the junction temperature. Current cooling solutions are not well suited for transient cooling as they have poor thermal energy storage capacity. Here we present a strategy for suppressing junction temperature fluctuation by utilizing the buffering capacity afforded by phase change material (PCM). In our device design, a thermal buffer is created by embedding PCM in a heat spreader that is attached directly above a GaN-based power transistor. The power transistor operates in a diode configuration to generate current-controlled thermal loads. During experiments, the junction temperature is measured using gate resistance thermometry that features temperature-sensitive electrical parameters (TSEPs) at weak, medium, and strong inversions. Preliminary results show that the presence of PCM in the heat transfer pathway suppresses junction temperature fluctuation. The outcomes of this study show the thermal buffering potential of PCM for transient thermal management of GaN-based power electronics.
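The buffering mechanism can be illustrated with a toy lumped-energy model; all parameter values below are invented for illustration, not measured device properties:

```python
# Toy lumped model of PCM thermal buffering: during a power burst the
# junction heats a thermal mass; with PCM present, heat above the melt point
# is absorbed as latent heat until the PCM is fully consumed.
def peak_rise(burst_j, c_th=0.5, latent_j=30.0, melt_rise=10.0, use_pcm=True):
    """Peak junction temperature rise (K) for a burst depositing burst_j joules.

    c_th: lumped thermal capacitance (J/K); latent_j: PCM latent heat (J);
    melt_rise: temperature rise at which the PCM starts melting (K).
    """
    if not use_pcm or burst_j <= c_th * melt_rise:
        return burst_j / c_th                     # purely sensible heating
    absorbed = min(burst_j - c_th * melt_rise, latent_j)
    return melt_rise + (burst_j - c_th * melt_rise - absorbed) / c_th

no_pcm = peak_rise(40.0, use_pcm=False)   # all heat goes into sensible rise
with_pcm = peak_rise(40.0)                # latent heat clips the excursion
```

With these illustrative numbers the PCM absorbs 30 J of the burst as latent heat, cutting the peak excursion from 80 K to 20 K, which is the qualitative suppression effect the experiments probe.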
Day 2: Thursday June 11
Session 3A: Digital Engineering and Multi-Agent Systems
Project 5.22
Unsupervised Testing and Verification for Software Systems of Ground Autonomous Vehicles
Sean Hickey, Nickolas Vlahopoulos (PI) (UM); David Gorsich, Jonathon Smereka, Joseph Madak, Paul Bounker (GVSC); Jae Song (DCS Corporation)
The verification and validation of autonomous ground vehicle software represent a significant portion of lifecycle cost, driven by system complexity and operation in unstructured environments. Conventional testing strategies rely on scripted scenarios and physical trials that may not uncover rare yet critical failures that require precise combinations of inputs to emerge. This work presents a methodology for unsupervised testing of the Robotics Technology Kernel (RTK) autonomy stack within the ProjectGL simulator. A genetic algorithm jointly explores the scenario and fault injection spaces, emulating real-world failures and adversarial attacks. The algorithm prioritizes maximizing fault severity, maximizing test case diversity, and maintaining an even test outcome distribution. The result is a replicable test set that characterizes each failure mode through the variety of test cases that trigger it. The effectiveness of the approach is demonstrated in a Palletized Load System (PLS) leader-follower platoon, in which the leader follows a prescribed path and the follower runs RTK. By revealing these failure conditions in simulation, the methodology enables faster iteration on the autonomy stack's robustness, reducing the cost of field validation.
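The search strategy can be sketched as a toy genetic algorithm over a two-parameter scenario/fault space; severity() is a hypothetical surrogate for simulated fault severity (the real system evaluates RTK in ProjectGL), and the diversity bonus mirrors the abstract's diversity objective:

```python
import random

random.seed(1)

# A genome is a pair (scenario_param, fault_param) in [0, 1]^2; the hidden
# "worst case" at (0.7, 0.3) stands in for a rare failure-triggering input.
def severity(g):
    s, f = g
    return -(s - 0.7) ** 2 - (f - 0.3) ** 2

def nearest_distance(g, pop):
    return min(abs(g[0] - p[0]) + abs(g[1] - p[1]) for p in pop if p is not g)

def evolve(generations=60, size=20, w_div=0.05):
    pop = [(random.random(), random.random()) for _ in range(size)]
    for _ in range(generations):
        # Fitness rewards severity plus distance to the nearest neighbour,
        # keeping the evolved test set diverse rather than collapsed.
        scored = sorted(pop, reverse=True,
                        key=lambda g: severity(g) + w_div * nearest_distance(g, pop))
        parents = scored[: size // 2]
        children = [tuple(min(1.0, max(0.0, x + random.gauss(0, 0.05))) for x in p)
                    for p in parents]
        pop = parents + children                 # elitist (mu + lambda) step
    return max(pop, key=severity)

best = evolve()
```

The actual fitness also balances outcome distribution across failure modes; here a single diversity term keeps the sketch short.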
Project 5.24
LLM-Enabled Operation Management of Multi-Agent Systems
Bogdan Epureanu (PI), Soham Purohit, Jaewon Lee (UM); David Gorsich, Phil Frederick, Jon Smereka (GVSC); Chenyu Yi (Mercedes-Benz)
Efficient command and control of multi-agent systems in complex, off-road environments remains a significant bottleneck for modern ground operations due to the high cognitive load placed on human operators. This research introduces a novel framework leveraging Large Language Models (LLMs) to streamline the mission planning pipeline for heterogeneous vehicle fleets in complex domains, such as forested and unstructured terrain. By interpreting high-level natural language commands, the framework identifies operational intent and translates it into a list of actionable targets. To ensure tactical reliability, the system integrates a closed-loop feedback mechanism where the LLM’s output is evaluated by motion planning and task allocation modules within a low-fidelity environmental simulator. This "simulation-in-the-loop" approach grounds the LLM’s generative capabilities, iteratively refining mission plans until they are validated against real-world environmental constraints. This architecture provides two critical advantages: final plans are guaranteed to be feasible and grounded in physical constraints, and the synergy of LLM pre-trained knowledge with iterative exploration yields near-optimal mission profiles. By generating executable, high-fidelity plans in under five minutes, this framework enables a single commander to manage complex fleets with minimal cognitive overhead, directly enhancing the tactical agility required for the U.S. Army’s future ground maneuvers.
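A minimal sketch of the closed loop, with llm_propose() and simulate() as hypothetical stand-ins for the LLM and the low-fidelity simulator (the speed cap and target names are invented for illustration):

```python
# "Simulation-in-the-loop" refinement: the LLM proposes a plan, the simulator
# checks feasibility, and its objections are folded back into the next prompt.
def llm_propose(command, feedback):
    # Stand-in: adopt the simulator's speed cap whenever one was reported.
    speed = feedback.get("max_feasible_speed", 20.0)
    return {"targets": ["grid_A4"], "speed": speed}

def simulate(plan, terrain_limit=12.0):
    # Stand-in feasibility check: forested terrain caps platoon speed.
    if plan["speed"] > terrain_limit:
        return {"valid": False, "max_feasible_speed": terrain_limit}
    return {"valid": True}

def plan_mission(command, max_iters=5):
    feedback = {}
    for _ in range(max_iters):
        plan = llm_propose(command, feedback)
        result = simulate(plan)
        if result["valid"]:
            return plan          # grounded, feasible plan
        feedback = result        # iterate with the simulator's objections
    raise RuntimeError("no feasible plan found")

plan = plan_mission("move fleet to grid A4")
```

The design point is that the generative model never has the last word: every emitted plan has survived the feasibility check, which is what grounds the LLM output in physical constraints.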
Project 5.25
Enhancing Military Digital Twins: Leveraging Dynamic Data-Driven Application Systems for Complex Operational Scenarios
PIs: Sara Masoud (WSU), Neda Masoud (UM); Students: Elnaz Alinezhad (WSU), Jason Lu (UM); GVSC: Stephen Rapp; Industry: Jon Rimanelli (Airspace Experience Technologies, Inc.)
Digital twins are increasingly important for supporting human–machine teaming in complex military operations, as they enable the integration of real-time data, continuous scenario updating, and predictive decision support. However, the growing use of autonomous systems, advanced sensing technologies, and distributed operational assets has outpaced the capabilities of traditional data integration and decision-making frameworks. In high-stakes environments such as contested logistics, these limitations reduce situational awareness, hinder adaptive resource allocation, and constrain mission responsiveness. To address these challenges, this project proposes a Dynamic Data-Driven Application Systems (DDDAS) framework that continuously couples live operational data with digital twin models to improve their fidelity, responsiveness, and decision relevance. A central component of the framework is a fidelity optimization mechanism that dynamically determines how sensor streams, model updates, and computational resources should be allocated to maintain accurate and timely twin representations under uncertain and changing mission conditions. In parallel, the project develops dynamic stochastic optimization models to support real-time mission planning and human–machine coordination, enabling adaptive task allocation, resource prioritization, and strategy adjustment as new information becomes available. By embedding optimization directly within the DDDAS loop, the proposed approach transforms the digital twin from a passive monitoring tool into an active decision-support system that can guide operational choices in real time. The resulting framework is expected to advance autonomous monitoring and control in contested logistics missions while establishing a scalable foundation for future digital twin systems that integrate predictive analytics, optimization, and human-in-the-loop decision making in contested and data-rich environments.
Project 5.A126
Autonomous Multi-UAV Reconnaissance in a Complex Environment
Carlo Pinciroli (PI), Davis Catherman (PhD candidate), Antonio Lopez (PhD candidate) (WPI)
We report our progress on autonomous multi-UAV reconnaissance for Movement-to-Contact operations, where swarms of fallible UAVs must complete complex missions in harsh, communication-degraded environments. The project pursues two research thrusts: task allocation under robot failures, and data management in sparse, occasionally-connected swarms.
On task allocation, we present FORMICA (Field-Oriented Regret-Minimizing Implicit Coordination Algorithm), a decision-focused learning framework that achieves high-quality task allocation without any robot-to-robot communication: a stronger result than originally proposed, which assumed unreliable rather than absent communication. FORMICA trains bid predictors end-to-end to minimize Task Allocation Regret, enabling robots to coordinate implicitly by modeling teammates' behavior. Experiments demonstrate a 17% improvement over an analytical baseline at training scale, with strong generalization to swarms 16× larger.
On data management, we present a micro-macro model for Encounter-Driven Information Diffusion (EDID), where robots exchange data only upon physical encounter. Derived from kinetic gas theory, the model identifies two diffusion regimes (logistic and Gompertz) blended by a parameter reflecting communication density, validated in physics-accurate simulation. This lays the foundation for the storage-and-routing algorithms the proposal targets, which remain ongoing work.
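A hedged sketch of the micro-macro idea above: a single informed-fraction curve blending a logistic regime (dense encounters) with a Gompertz regime (sparse encounters). The functional form, rate r, midpoint t0, and blend parameter alpha are assumptions for illustration, not the project's derived kinetic-theory model:

```python
import math

def informed_fraction(t, r=1.0, t0=5.0, alpha=0.5):
    """Fraction of the swarm holding the datum at time t.

    alpha = 1 -> pure logistic regime (dense, frequent encounters);
    alpha = 0 -> pure Gompertz regime (sparse, rare encounters).
    """
    logistic = 1.0 / (1.0 + math.exp(-r * (t - t0)))
    gompertz = math.exp(-math.exp(-r * (t - t0)))
    return alpha * logistic + (1.0 - alpha) * gompertz
```

Both component curves rise monotonically from 0 to 1, so any blend does too; alpha plays the role of the communication-density parameter described in the abstract.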
Session 3B: Perception & Planning
Project 1.A122
UGV-guided Real-time Path Planning for a Vehicle Platoon in Rough Terrains
Constantinos Chamzas (PI), Jing Xiao (Co-PI), Lee Moradi (Co-PI), Khasif Khursid Noori (MS Student), Jaskrit Singh (MS Student), Abhiroop Ajith (Ph.D. Student)
Planning mission routes for platoons of soldier-operated vehicles in rough, uncertain terrain requires paths that are safe, efficient, and traversable despite incomplete prior knowledge of terrain and threats. To address this challenge, this project pursued three tightly connected objectives: generating platoon paths that jointly satisfy spatiotemporal mission goals and kinodynamic feasibility constraints, planning and executing UGV reconnaissance to validate candidate routes in partially known environments, and updating traversability estimates from heterogeneous sensing across different vehicle platforms. In support of the first objective, we developed a multi-layer motion planning framework that computes kinodynamically feasible trajectories under spatio-temporal constraints by combining candidate region sequencing, geometric lead paths, and asymptotically optimal planning. In support of the second objective, we developed a reconnaissance planner that enables a UGV to gather information needed to validate proposed platoon paths. In support of the third objective, we developed a directional global traversability mapping framework that learns heading-dependent, vehicle-conditioned costmaps from RGB-D observations using self-supervised multi-task learning across heterogeneous vehicles. Together, these results realize a complete framework for reconnaissance-informed platoon mobility in unknown terrain.
We further investigate the ability of a learning framework to support the recovery of victims in an unstructured search and rescue (SAR) task. To address partial observability, inter-team information asymmetry, and intermittent human intervention, we propose a human-on-the-loop POMCP-based framework that jointly reasons over environmental uncertainty, team coordination, and human behavior. The approach models latent human preferences and team strategies while leveraging a dynamic bias map to guide complementary search. Lab experiment results in multi-group SAR scenarios demonstrate improved efficiency and robustness over baseline methods.
Project 1.A123
Model-free Tracking Control Design for a Class of Nonaffine Nonlinear Systems
Masood Ghasemi (PI), Lee Moradi (Co-PI), Shila Alizadehghobadi (WPI)
Autonomous operation of ground vehicles in harsh, unstructured, and dynamic environments faces great challenges in ensuring persistent agile mobility and maneuverability. A major issue is the limiting assumptions placed on the vehicle system and its interaction with the environment. This issue should therefore be rectified by employing a less model-dependent or totally model-free control approach. This work investigates a tracking control design for a class of nonaffine nonlinear systems. Specifically, it is based on the dynamic surface control approach and uses filters to decouple the system into a quasi-independent cascade structure. The approach is fully model-free, uses full-state or partial-state information of the system, and can address matched and mismatched uncertainties and disturbances. Furthermore, a special transient control design is provided to ensure boundedness and to prevent control saturation. Finally, preliminary simulations are provided to show the efficacy of the method.
Project 1.A124
Modeling and Simulation Fundamentals of Agile Vehicle Maneuvers in Hyper-Dynamic Environments
PIs: Vladimir Vantsevich, Lee Moradi, Parth Patel; Student: Lorenzo Hess (WPI); GVSC: David Gorsich, Philip Frederick; Industry: Team O'Neil
Military vehicle maneuverability is defined as a vehicle operational property that characterizes the capability of the vehicle to navigate terrain and surroundings and to carry out (i) maneuvers and (ii) administrative and tactical movements, while optimizing maneuver and movement time by managing (i) vehicle turnability, (ii) stability, and (iii) handling, at the cost of mobility and energy efficiency if needed. As follows from this definition, maneuvers and maneuverability of next-generation vehicles in hyper-dynamic environments should be hyper-agile, i.e., the vehicles' response to dynamic changes in the environment should be extremely fast, precise, and, when possible, preemptive. The goal of this project is to establish agile situational movements and agile maneuvers and to develop analytical fundamentals of military vehicle maneuverability to support modeling and simulation of agile maneuvers in hyper-dynamic environments with severe terrains. The project focuses primarily on UGV applications, with the understanding that robotic vehicles can execute agile offensive and defensive maneuvers with no or minimal damage to themselves, compared to human-controlled vehicles.
Project 1.A125
Verified Planning for Uncrewed Ground Vehicles in Dynamic Environments
Kevin Leahy (PI), Vladimir Vantsevich, Lee Moradi (co-PIs), Taylor Bergeron, Rohan Walia, Philip Smith (WPI); Philip Frederick, Jon Smereka (GVSC); Alyssa Scheske (Applied Intuition)
Signal Temporal Logic (STL) shows promise for autonomous robot mission planning, including encoding rules for autonomous driving. However, applying STL to military Unmanned Ground Vehicles (UGVs) presents two key challenges: UGVs operate on unpaved, variable terrain requiring traversability constraints, and the standard STL planning approach—mixed-integer linear programming (MILP)—is NP-hard and too slow for operational replanning.
This project aims to develop a rapid planning and replanning method for autonomous vehicles operating in dynamic, unstructured environments, linking high-level mission objectives with low-level trajectory constraints. Two core research questions guide this work:
RQ1: Can an autonomous agent make verifiable planning decisions at operational speeds? Existing optimization approaches demand computational resources beyond the size, weight, and power (SWaP) constraints of deployed UGVs, necessitating a new, hardware-efficient solution method.
RQ2: Can the decision process incorporate dynamic environmental and mission updates? Unlike symbolic methods that allow incremental graph updates, optimization-based approaches typically require complete re-solving when conditions change, making an incrementally updatable solution method essential.
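To make the planning objective concrete, the sketch below computes discrete-time STL robustness for the two basic temporal operators on a toy trajectory; the spec and signals are illustrative, not the project's actual encoding or constraints:

```python
# Discrete-time STL robustness, the quantity a MILP encoding maximizes.
def robustness_always(signal, margin):
    """rho(G phi) = worst-case (minimum) predicate margin over time."""
    return min(margin(x) for x in signal)

def robustness_eventually(signal, margin):
    """rho(F phi) = best-case (maximum) predicate margin over time."""
    return max(margin(x) for x in signal)

# Example spec: "always keep traversability cost below 0.8, and eventually
# come within 1 m of the goal at x = 0" on a toy 1-D trajectory.
costs = [0.2, 0.5, 0.6, 0.3]
positions = [10.0, 6.0, 2.0, 0.5]
rho = min(robustness_always(costs, lambda c: 0.8 - c),
          robustness_eventually(positions, lambda p: 1.0 - abs(p)))
# rho > 0 means the trajectory satisfies the conjunction, with margin rho.
```

Encoding these min/max semantics with binary variables is what makes the standard MILP formulation NP-hard, motivating the hardware-efficient alternative sought in RQ1.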
Session 3C: Terramechanics
Project 3.A120
Simple Terramechanics-based Tire and Soil Separation for Tire Behavior Characterization
Vladimir Vantsevich (PI), Lee Moradi (co-PI), Jesse Paldan (WPI); David Gorsich, Amandeep Singh, Jake Brendle (GVSC); Michael McCullough (BAE Systems)
Semi-empirical terramechanics has many advantages that contribute to its widespread and continued use. Semi-empirical approaches are simple to model, making them applicable to real-time and faster-than-real-time simulation and control. However, concerns remain about estimating the contributions of tire and soil to wheel mobility and energy efficiency, as existing methods do not separate the contributions of the tire and soil to sinkage, tire deflection, and slippage. The research objectives of this project target the technical novelty of separating the tire and soil impacts on tire-terrain interaction in the longitudinal and normal directions. A method was derived to mathematically split tire and soil contributions to normal deflection using a semi-empirical terramechanics-based model that characterizes the response to ground pressure with separate inputs for the tire and soil characteristics. The longitudinal traction-slippage response of the tire is modeled as an exponential curve split into tire and soil contributions to slippage. The models can be used to approximate longitudinal tire slippage and normal deflection, and to estimate and compare the separate contributions of the tire and soil to tire-terrain interaction.
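As a toy illustration of the separation idea (the series-compliance assumption and all parameter values are ours, not the project's calibrated model), the tire and soil can be treated as two compliant elements in series under the same load, while traction follows an exponential curve in slip:

```python
import math

def split_normal_deflection(load_n, k_tire=180e3, k_soil=60e3):
    """Return (tire deflection, soil sinkage) in metres under load_n newtons.

    Series compliance: both elements carry the same force, deformations add,
    so each contribution is set by its own stiffness (N/m).
    """
    return load_n / k_tire, load_n / k_soil

def traction(slip, f_max=4000.0, k_slip=0.15):
    """Exponential traction-slip curve F(s) = F_max * (1 - exp(-s / K))."""
    return f_max * (1.0 - math.exp(-slip / k_slip))

d_tire, d_soil = split_normal_deflection(9000.0)   # separate contributions
```

With these illustrative stiffnesses the softer soil accounts for three quarters of the total normal deformation, which is exactly the kind of attribution the derived models make quantitative.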
Project 3.A121
Analytical Fundamentals of Digital Image Correlation for Characterization of Agile Tire Dynamics
Daniel Ruiz-Cadalso, Mayank Arora, Stella Burfeind, Tanmay Shinde, Hiro Smith, Barbara Karkanias, Jesse Paldan, Cosme Furlong (PI), Vladimir Vantsevich (co-PI), Lee Moradi (co-PI) (WPI); Jordan Ewing, Nehemiah Mork, Graham Fiorani (GVSC); Gene Lukianov (VRAD Engineering LLC); Douglas Milliken (Milliken Research Associates, Inc.); Tim Schmidt (Trilion Quality Systems, LLC); Jonathan Darab (GCAPS)
The development of automotive tires is undergoing rapid transformations driven by increasing demand for enhanced performance, safety, and efficiency of autonomous vehicles. Agile tire dynamics, such as slippage and relaxation, are vital performance metrics and testing needs to be performed using specialized machinery with highly controlled operational conditions and loads. Current experimental characterization methods often lack the spatial or temporal resolution required to capture high-speed transient phenomena in full field, which is critical for optimizing real-time traction control and terrain mobility. This research proposes an integrated framework combining numerical models with novel 3D Digital Image Correlation (3D-DIC) and complementary optical techniques. Metrological systems are designed and optimized to quantify tire deformations during transient and steady-state conditions. A testing protocol was designed and executed to characterize the tires’ mechanical properties as they respond to static, quasi-static, and dynamic loads. Computational models of hyper-elastic, nonlinear material behavior are being developed using experimental data. Future work will focus on modeling with new experimental data on longitudinal relaxation dynamics, advancing the fundamental understanding of tire-terrain behavior to improve agile control of next-generation vehicle systems.