Automotive Research Center

2021 ARC Research Seminar - Fall Series

October 1, Friday, 9:30-10:30 a.m. Eastern Time

Processing Image Data from Unstructured Environments

PI: Dr. Nick Vlahopoulos, Professor of Naval Architecture and Marine Engineering, University of Michigan
Link to project

The US Army Ground Vehicle Systems Center (GVSC) captures a large amount of data from ground vehicle systems during development and experimentation in both manned and autonomous operations. Currently, tools for processing unlabeled data in a semantic manner are lacking. This ARC research is developing two new capabilities: increasing low-shot classification accuracy, and unsupervised soft labeling (i.e., clustering into groups with similar statistical characteristics, without knowing ahead of time how many such groups exist) of images and video frames that are collected but not currently labeled. Both capabilities are built on a robust unsupervised feature extractor trained on unlabeled images collected from Army battlefield-like experiments. Low-shot classification is of interest in reconnaissance operations of autonomous Army vehicles: the autonomous vehicle is expected to collect information about specific targeted objects (relevant classes) and ignore any other, unrelated objects (irrelevant classes). The clustering capability allows the image features to be cross-correlated with other relevant data to identify significant events and plan the appropriate action through the control algorithms embedded in the vehicle. Various active GVSC robotics projects will benefit, such as the Autonomous Mobility thru Intelligent Collaboration (AMIC) and Combat Vehicle Robotics (CoVeR) programs. The completed research and the planned effort will be presented. The plans include accessing the DDR (Data Director) through the D12E.net website so that our software can "talk" to the tools the Army already has available, bringing the information we produce into the Army ecosystem.

A Robust Semantic-aware Perception System using Proprioception, Geometry, and Semantics in Unstructured and Unknown Environments

PIs: Dr. Maani Ghaffari, Assistant Professor of Naval Architecture & Marine Engineering, University of Michigan, and Dr. Kira Barton, Associate Professor of Mechanical Engineering, University of Michigan
Link to project

In this talk, I will present a dynamic semantic mapping framework that incorporates 3D scene flow measurements into a closed-form Bayesian inference model. The existence of dynamic objects in the environment causes artifacts and traces in current mapping algorithms, leading to an inconsistent map posterior. I will discuss how we leverage state-of-the-art semantic segmentation and 3D flow estimation using deep learning to provide measurements for map inference. We develop a continuous (i.e., can be queried at arbitrary resolution) Bayesian model that propagates the scene with flow measurements and infers a 3D semantic occupancy map with better performance than its static counterpart. I will also present some experimental results using publicly available data sets and discuss opportunities in this area for future work.
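The flow-propagated Bayesian update idea can be sketched in a few lines. This is a toy stand-in, not the project's actual model: the 1-D cell layout, the three-class set, and the integer flow representation are all illustrative assumptions; the actual framework operates on continuous 3-D maps with learned scene-flow measurements.

```python
import numpy as np

# Illustrative 1-D grid of cells; each cell keeps Dirichlet concentration
# parameters over semantic classes (free, static, dynamic).
N_CELLS, N_CLASSES = 10, 3
alpha = np.ones((N_CELLS, N_CLASSES))  # uniform prior

def propagate(alpha, flow):
    """Shift each cell's Dirichlet evidence by its (integer) flow so that
    evidence moves with dynamic objects instead of leaving traces behind."""
    out = np.ones_like(alpha)  # reset to prior, then carry evidence over
    for i, f in enumerate(flow):
        j = i + f
        if 0 <= j < len(alpha):
            out[j] += alpha[i] - 1.0   # move the accumulated evidence
    return out

def update(alpha, cell, class_probs):
    """Closed-form Bayesian update: add soft semantic-segmentation
    probabilities as pseudo-counts for the observed cell."""
    alpha = alpha.copy()
    alpha[cell] += np.asarray(class_probs)
    return alpha

# A dynamic object observed in cell 2, then flow carries it to cell 3.
alpha = update(alpha, 2, [0.05, 0.05, 0.90])     # class "dynamic"
alpha = propagate(alpha, flow=np.array([0] * 2 + [1] + [0] * 7))
posterior = alpha / alpha.sum(axis=1, keepdims=True)
print(posterior[3])  # evidence is now concentrated in cell 3
```

Propagating the evidence itself, rather than re-observing, is what keeps the map posterior consistent when objects move.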

October 29, Friday, 9:30-10:30 a.m. Eastern Time

Ultrasound-based Perception with Convolutional Neural Networks

PIs: Drs. Bogdan Popa and Bogdan Epureanu, Mechanical Engineering, University of Michigan
Link to project

Echolocating animals such as bats and dolphins demonstrate that ultrasound can be used effectively to classify and locate objects in complex environments where optics-based imaging is ineffective, e.g., in fog, rain, or snow. How to replicate the performance of biosonar in artificial autonomous navigation systems, however, is still an open question. In this talk I will show that convolutional neural networks (CNNs) are excellent at classifying and locating objects from single-point echo measurements, without time-consuming scanning of the object surface with very narrow beams. A full-wave 3D simulation framework will be presented that quickly and accurately computes the echoes produced by arbitrary ultrasound beams impinging on distant objects. Simulations obtained with this framework show that the time-domain echoes generated by different objects have rich structure. We show that CNNs, which excel at pattern-recognition tasks, can efficiently find patterns in the echoes and use them to map each echo to the object's identity and location. The robustness of these algorithms to noise will also be quantified.
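The core premise, that a single-point echo carries enough structure to identify an object, can be illustrated with a far simpler classifier than the CNN described above. The "echo signatures" below are synthetic pulse trains, and the correlation classifier is a hypothetical baseline, not the authors' method:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 400)

# Toy "echo signatures": each object class returns a pulse train whose
# delays/amplitudes stand in for the rich time-domain structure that the
# full-wave simulations reveal. Real echoes would come from the 3D solver.
def echo(delays, amps):
    return sum(a * np.exp(-((t - d) / 0.01) ** 2) for d, a in zip(delays, amps))

templates = {
    "sphere":   echo([0.30, 0.55], [1.0, 0.4]),
    "cube":     echo([0.30, 0.42, 0.60], [1.0, 0.7, 0.3]),
    "cylinder": echo([0.35, 0.70], [0.8, 0.6]),
}

def classify(signal):
    """Correlation against class templates: a single-measurement,
    no-scanning classifier in the spirit of (but far simpler than)
    the CNN described in the talk."""
    scores = {k: float(np.dot(signal, v) / (np.linalg.norm(v) + 1e-12))
              for k, v in templates.items()}
    return max(scores, key=scores.get)

# One noisy single-point measurement of a cube echo:
noisy = templates["cube"] + 0.05 * rng.standard_normal(t.size)
print(classify(noisy))  # expected: "cube"
```

A CNN replaces the fixed templates with learned convolutional filters, which is what lets it generalize across object poses and distances where simple correlation breaks down.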

Elicitation, Computational Representation, and Analysis of Mission and System Requirements

Rahul Rai, Chandan Kumar Sahu, Vinayak Khade, Mohan Surya Raja Elapolu, Nafiseh Masoudi, Guo Freeman, Georges Fadel, Margaret Wiecek, and Cameron Turner, Clemson University
Denise Rizzo, Jonathan Smereka, Matt Castanier, and David J. Gorsich, US Army Ground Vehicle Systems Center
Link to project

Strategies for evaluating the impact of mission requirements on the design of mission‐specific vehicles are needed to enable project managers to assess the potential benefits and associated costs of changes in requirements. Top-level requirements that cause significant cascaded difficulties for lower‐level requirements should be identified and presented to decision-makers. This project aims to introduce formal methods and computational tools to enable the analysis and allocation of mission requirements and their associated key performance indicators (KPIs). The presentation will outline two complementary, interrelated research thrusts being pursued to achieve these objectives: (1) representing technical requirements computationally using natural language processing (NLP) and identifying inter-relationships between them using graph-based algorithms, and (2) deploying gamification and serious-game platforms to carry out requirements-engineering tasks.
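Once requirements and their inter-relationships are extracted into a graph, ranking top-level requirements by cascaded impact is a reachability computation. The requirement IDs and dependency links below are invented purely for illustration:

```python
# Hypothetical requirement-dependency graph: edges point from a
# higher-level requirement to the lower-level requirements it drives.
deps = {
    "R1-range":         ["R2-fuel-capacity", "R3-drag"],
    "R2-fuel-capacity": ["R4-tank-volume", "R5-curb-weight"],
    "R3-drag":          ["R6-frontal-area"],
    "R5-curb-weight":   ["R6-frontal-area"],
}

def cascade(req, graph):
    """All lower-level requirements transitively affected by `req` --
    a proxy for the cascaded difficulty a top-level change causes."""
    seen, stack = set(), [req]
    while stack:
        for child in graph.get(stack.pop(), []):
            if child not in seen:
                seen.add(child)
                stack.append(child)
    return seen

# Rank requirements by the size of their downstream cascade:
ranked = sorted(deps, key=lambda r: -len(cascade(r, deps)))
print(ranked[0], len(cascade(ranked[0], deps)))  # → R1-range 5
```

In the project itself, the nodes and edges would come from the NLP thrust rather than being hand-coded, and the ranking metric could weight edges by coupling strength.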

November 5, Friday, 9:30-11:00 a.m. Eastern Time

In-the-wild Question Answering: Toward Natural Human-Autonomy Interaction

PIs: Dr. Rada Mihalcea and Dr. Mihai Burzo, University of Michigan
Link to project

Situational awareness remains a major goal for both humans and machines as they interact with complex and dynamic environments. Awareness of unfolding situations allows for rapid reactions to events in the surrounding environment and enables more informed, and thus better, decision-making. In this project, we address the task of in-the-wild multimodal question answering, in which an autonomous system visually charts a territory and is able to answer questions about the entities and events it has observed, such as “What objects do you see?,” “How many people are there?,” or “Do they wear uniforms?” We will provide an update on our research to date, including an overview of our data-collection process and a description of the initial algorithms we developed to achieve a better understanding of in-the-wild visual scenes.

Integrated Transient Control and Thermal Management of Autonomous Off-Road Vehicle Propulsion Systems

Quad members: Drs. Robert Prucka (PI), Chris Edrington, Qilun Zhu, Gokhan Ozkan, Clemson University; Dr. Vamshi Korivi, U.S. Army GVSC
Link to project

Powertrains for autonomous off-road vehicles need to produce extremely high power outputs for short periods to meet propulsion and on-board electrical system demands. High instantaneous power requirements create significant challenges for coordination of energy resources, especially in a cooling-constrained environment. This research focuses on real-time optimal control strategies that account for individual component and system response, ensuring efficient transient torque and electrical power delivery within the thermal constraints of the powertrain and associated power electronics converters. The control methodologies being investigated take advantage of forward-looking information, when available from autonomous sensing systems, to better optimize powertrain efficiency, cooling, and electrical energy delivery. This talk will cover recent progress in system modeling and optimization while providing insight into the real-time control structure under development.

Learning Enabled Mission Adaptation for a Hybrid Opposed Piston Engine

PI: Dr. Jason Siegel, Assistant Research Scientist, Mechanical Engineering, University of Michigan
Link to project

Electric power is a key enabler for advanced combat systems with diverse mission objectives, such as low power for stationary charging, high output for maximum mobility of an autonomous vehicle, or high torque for towing. The single-cylinder hybrid opposed piston engine is used as a testbed for self-calibration, optimization, and diagnosis of an advanced fuel-efficient power source. The research goals include demonstrating the capability to identify ideal engine operating setpoints without explicit manual input or a scheduled operating map. The presentation will describe the recent development of an iterative, online trajectory optimization algorithm for the intra-cycle control of the crankshaft motion in the opposed piston engine. We show experimental results validating the algorithm at various speed and load setpoints and quantify the efficiency improvements compared to trajectories obtained through offline optimization. Finally, we will describe the direction of future work to enable system-level control of the hybrid architecture.

December 3, Friday, 9:30-11:00 a.m. Eastern Time

Deep Reinforcement Learning Approach to CPS Vehicle Re-envisioning

Ajinkya Joglekar, Ph.D. Candidate, Automotive Engineering; Alexander Krolicki, Ph.D. Candidate, Mechanical Engineering; Dr. Venkat Krovi (PI), Michelin Endowed Chair Professor of Vehicle Automation, Clemson University
Link to project

Deep-learning-based methods hold enormous promise in overcoming the challenges of complexity and real-time performance in various autonomous vehicle deployments. As exemplars of systems-of-systems, autonomous vehicles (AVs) engender numerous interconnected component-, subsystem- and system-level interactions. Our work explores the creation of operational frameworks to support end-to-end deep-learning-based autonomy deployments that can support verification and validation.
In particular, we are exploring alternative autonomy frameworks: imitation learning, Deep Reinforcement Learning (DRL), and Deep-Koopman Reinforcement Learning (DKRL), with an eye to performance generalizability, scalability, and robustness. Behavior cloning is a form of supervised imitation learning whose main motivation is to build a policy that mimics the human operator's actions and supports subsequent inferencing. While end-to-end Deep Neural Network (DNN) based policy encapsulation (e.g., mapping raw camera pixels to steering commands) can minimize dependence on feature engineering, it suffers from limited generalizability to environment changes. Deep Reinforcement Learning frameworks for learning complex model-based/model-free policies in high-dimensional environments can leverage simulation and real-world data but require careful parameter/reward/algorithm selection to improve scalability, learning speed, and performance. Deep-Koopman Reinforcement Learning approaches blend the computational approximation benefits of DNNs with theoretical guarantees from dynamical systems and control in the realization of data-driven control. In particular, a physics-aware yet data-driven approximation of the Koopman lifting functions via DNNs facilitates systematic deployment of foundational linear-systems tools, from optimal control to robust design, with performance guarantees.
We will also highlight the operationalization within a scaled-vehicle framework (F1/10 vehicle ecosystem) that permits transitions between different test environments (simulated vs real) during the verification and validation of the deep-learning-based autonomy algorithms while also engaging a diverse group of autonomy researchers.
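The Koopman lifting idea above can be sketched with an EDMD-style least-squares fit. The dictionary here is hand-chosen and the dynamics are invented for illustration; in Deep-Koopman RL, a DNN learns the lifting functions instead:

```python
import numpy as np

# Lift a nonlinear system into a space where its evolution is
# (approximately) linear, then fit the linear Koopman operator K.
def lift(x):
    # x = [x1, x2]; hand-picked dictionary: [x1, x2, x1*x2, x1**2]
    return np.array([x[0], x[1], x[0] * x[1], x[0] ** 2])

def step(x):
    # A simple nonlinear discrete-time system (illustrative only).
    return np.array([0.9 * x[0], 0.8 * x[1] + 0.1 * x[0] ** 2])

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(200, 2))
Phi = np.array([lift(x) for x in X])             # lifted states
Phi_next = np.array([lift(step(x)) for x in X])  # lifted successors

# Least-squares fit of the linear operator: Phi_next ≈ Phi @ K
K, *_ = np.linalg.lstsq(Phi, Phi_next, rcond=None)

# Multi-step prediction stays entirely in the lifted linear space:
x0 = np.array([0.5, -0.3])
z = lift(x0)
for _ in range(3):
    z = z @ K
x_true = step(step(step(x0)))
print(np.abs(z[:2] - x_true))  # small error: the linear model tracks the system
```

Once the dynamics are (approximately) linear in the lifted coordinates, standard linear optimal-control and robustness tools apply, which is the payoff DKRL is after.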

Cognitive Modeling of Human Operator Behavior during Interaction with Autonomous Systems

PI: Dr. Tulga Ersal, Associate Research Scientist, Mechanical Engineering, University of Michigan
Link to project

Haptic shared control is a semi-autonomous driving framework that promises a better shared control experience between a human operator and autonomy through a seamless control authority negotiation between the two agents. Understanding how a human interacts with autonomy in this paradigm and formulating this understanding mathematically can accelerate the development of haptic shared control technologies and improve human-autonomy negotiation. However, such a fundamental understanding and mathematical models that capture it are not available in the literature. Therefore, current development and validation of haptic shared control technologies depend on lengthy and expensive human subject tests, because a fully simulation-based alternative is not feasible without a computational human model for this paradigm.
This project aims to bridge this gap by investigating human steering behavior in and developing a computational human operator model for haptic shared control. We analyze the performance and behavior of humans as they collaborate with autonomy to perform a simulated path tracking task. From the experimental observations, we formulate hypotheses about the principles governing the human’s haptic negotiation with autonomy and formalize these hypotheses in a mathematical model built on the cognitive framework ACT-R (Adaptive Control of Thought-Rational).
In this talk we will focus on two operating conditions, one nominal and one in which autonomy has perception bias, and show how the model predicts human performance in them, with validation against human subject experiments.
This is the first model that offers such predictive capability and is an important step toward fully simulation-based development and evaluation of haptic shared control technologies.

Language Communication and Collaboration with Autonomous Vehicles Under Unexpected Situations

PI: Dr. Joyce Chai, Professor of Computer Science and Engineering, University of Michigan
Link to project

In a human-autonomy team, particularly in the context of autonomous vehicles, the highly dynamic environment often leads to unexpected situations that pre-trained models or existing plans are not sufficient to handle. What is immediately available to the vehicles is often only human operators. This raises an important question: how can we enable collaboration between humans and vehicles to jointly handle these unexpected situations? To address this question, this project intends to empower autonomous vehicles to harness human knowledge and expertise, and to enable natural-language communication and collaboration in tackling unexpected situations. In this talk, we will give an update on our research progress. In particular, we will present a simulated environment and a novel interface developed to simulate a variety of exceptions and facilitate data collection.

December 10, Friday, 9:30-10:30 a.m. Eastern Time

Remote connection via Microsoft Teams. Contact William Lim (williamlim@umich.edu) for details.

Energy Management of Multi-Scale Vehicle Fleets

PI: Dr. Beshah Ayalew, Professor of Automotive Engineering; Nate Goulet, Ph.D. Candidate, Automotive Engineering, Clemson University
Link to project

In off-road mission environments, fleets of unmanned ground vehicles (UGVs) often operate with limited energy resources. In this project, mobile vehicle-borne microgrids are considered as charging hosts that facilitate sharing of energy resources among the UGV fleet. In this seminar, we first cover optimal energy-utilization planning for individual UGVs and outline a hierarchical model-based algorithm that embeds detailed vehicle-deformable-terrain interaction models as well as terrain topology information. The algorithm, which has similarities to rollout, uses offline-computed cost-to-go function approximations as terminal heuristics appended to the horizon costs of a nonlinear model predictive controller acting as the local planner. The energy-saving benefits of this algorithm and its computational complexity will be discussed by comparing it with popular tracking formulations. We then discuss one formulation of the optimal multi-vehicle energy-resource coordination problem. Therein, given designated safe charging zones, a priori assigned target task areas for each UGV, and energy cost-to-go information for teams of UGVs operating in a mission area, we formulate and solve for the optimal motion and deployment of both the host microgrid and the UGVs that minimize collective energy utilization for the fleet. We present early results in this direction, along with a discussion of inspirations from and distinctions with other related works.
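The rollout-style structure, an offline cost-to-go heuristic appended to a short-horizon local planner, can be sketched on a toy 1-D terrain. The per-cell energy costs are invented, and the real algorithm uses deformable-terrain models and a nonlinear MPC rather than move enumeration:

```python
import heapq
from itertools import product

# Toy 1-D terrain: energy cost to enter each cell (numbers invented).
cost = [0, 4, 1, 6, 2, 1, 3, 1]
GOAL = len(cost) - 1

def cost_to_go(cost, goal):
    """Offline Dijkstra from the goal: J[i] is the minimum energy needed
    to reach the goal from cell i (the terminal heuristic)."""
    J = [float("inf")] * len(cost)
    J[goal] = 0.0
    pq = [(0.0, goal)]
    while pq:
        j, i = heapq.heappop(pq)
        if j > J[i]:
            continue
        for n in (i - 1, i + 1):
            # Stepping n -> i enters cell i, paying cost[i].
            if 0 <= n < len(cost) and j + cost[i] < J[n]:
                J[n] = j + cost[i]
                heapq.heappush(pq, (J[n], n))
    return J

J = cost_to_go(cost, GOAL)

def local_planner(state, horizon=2):
    """Receding-horizon step: enumerate short move sequences, score their
    stage costs plus the precomputed cost-to-go at the horizon's end, and
    return the first move of the best sequence (rollout-style)."""
    best = None
    for moves in product((-1, +1), repeat=horizon):
        s, stage = state, 0.0
        for m in moves:
            if not 0 <= s + m < len(cost):
                stage = float("inf")
                break
            s += m
            stage += cost[s]
            if s == GOAL:          # horizon truncated at the goal
                break
        total = stage + J[s]
        if best is None or total < best[0]:
            best = (total, moves[0])
    return best[1]

# Drive a single UGV with the receding-horizon controller:
s, path = 0, [0]
while s != GOAL:
    s += local_planner(s)
    path.append(s)
print(path)
```

The terminal heuristic J is what lets a short planning horizon avoid myopic decisions: the local planner only looks two cells ahead, yet the cost-to-go term steers it along the globally cheap route.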

Materials Design of Polycarbonates at the Atomistic Scale with Machine Learning

PI: Dr. Christopher Barrett, Assistant Professor of Mechanical Engineering, Mississippi State University; Dr. Doyl Dickel, Assistant Professor; Mashroor Nitol, Ph.D. Candidate
Link to project

Polycarbonate (PC) materials are ubiquitous because of their excellent transparency, low weight, and good strength and impact resistance. Despite this, current models of their atomic scale material properties are not quantitatively reliable. In this project, we are developing a new atomistic model using machine learning to simulate polycarbonate at the atomic scale. As the first steps in this process, we have produced a new model for hydrocarbon molecules which captures the behavior of a wide range of molecules from methane to benzene rings to longer chains. The model is developed by assembling a large body of training data from density functional theory calculations of the energies of hydrocarbon molecules as a function of bond lengths and distortions, as well as similar data for pure hydrogen and pure carbon. The coordinate data for these points are refined into structural fingerprints which are smooth descriptors of the environment around each atom. A neural network is trained to compute the atomic energies from the fingerprints. Our results so far demonstrate that the neural network can stably simulate molecular dynamics for a variety of hydrocarbon molecules and extrapolates beyond the training data reliably.
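The structural-fingerprint step can be illustrated with Behler-Parrinello-style radial symmetry functions, a smooth, rotation- and permutation-invariant descriptor of each atom's environment. The cutoff, Gaussian widths, and toy geometry below are illustrative assumptions; the project's actual fingerprints may differ:

```python
import numpy as np

CUTOFF = 3.0             # neighbor cutoff radius (illustrative)
ETAS = [0.5, 1.0, 2.0]   # Gaussian widths (illustrative choices)

def cutoff_fn(r):
    """Smooth cutoff so the fingerprint varies continuously as atoms
    enter or leave the neighborhood."""
    return np.where(r < CUTOFF, 0.5 * (np.cos(np.pi * r / CUTOFF) + 1.0), 0.0)

def fingerprint(positions, i):
    """Radial fingerprint of atom i: sums over neighbors of Gaussians of
    the pair distance, damped by the cutoff. This vector is what a neural
    network would map to an atomic energy."""
    r = np.linalg.norm(positions - positions[i], axis=1)
    r = r[r > 1e-9]                       # drop the self-distance
    fc = cutoff_fn(r)
    return np.array([np.sum(np.exp(-eta * r ** 2) * fc) for eta in ETAS])

# A toy 4-atom "molecule" (coordinates in angstroms, invented):
pos = np.array([[0.0, 0.0, 0.0],
                [1.1, 0.0, 0.0],
                [0.0, 1.1, 0.0],
                [0.0, 0.0, 1.1]])
fp = fingerprint(pos, 0)

# Rotating the molecule leaves the fingerprint unchanged -- exactly the
# invariance that makes it a usable neural-network input:
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
fp_rot = fingerprint(pos @ R.T, 0)
print(np.allclose(fp, fp_rot))  # True
```

Because the fingerprint depends only on interatomic distances, the network trained on it automatically inherits the physical symmetries of the energy.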

Study of Tire-Mud Interaction and Modeling

ARC-ERA (Excellence in Research) Awardee: Varsha Swamy, PhD student, Virginia Tech
Advisors: Drs. Corina Sandu and Alba Yerro-Colom, Virginia Tech

The mobility of ground vehicles on wet or saturated cohesive soils (mud) is a challenge. Military, agricultural, and earthmoving vehicles operate in such hostile conditions very often. Depending on its loading history, the strength of clay in wet conditions may decrease dramatically when subjected to rapid shear loading, leading to large deformations and eventually to liquid-like behavior and hydroplaning. We aim to create an advanced physics-based terrain model for what is commonly called mud, to account for the effect of water in the pores of such cohesive fine-grained soils, clays in particular. We will discuss numerical modeling of tire–mud interaction using coupled Smoothed Particle Hydrodynamics and Finite Element Analysis (SPH-FEA) techniques, along with its experimental validation. In this seminar, we present highlights of the extensive literature review conducted and simulations that capture the short-term response of the soil in undrained conditions. We will conclude by discussing preliminary results and emphasizing future work.