2024 ARC Research Seminar - Winter Series
Remote connection via Microsoft Teams. Contact William Lim (williamlim@umich.edu) for details.
February 16, Friday, 11:00am-12:00pm eastern time
Advancing Autonomous Ground Vehicles: A Systems Engineering Approach
A cluster of coordinated ARC projects at Michigan State University
PIs: Mahmood Haq, Zhaojian Li, Nizar Lajnef, Chengcheng Fang, Shanelle Foster, John Papapolymerou, et al.
Abstract: Maintaining a technological edge in autonomous mobility is critically important to meeting the Department of Defense’s mission of ensuring national security. While many technologies have been developed in the areas of autonomous vehicles, advanced materials, machine learning, computer vision, and human-machine interaction, these innovations are scattered and their full potential remains unrealized. Until the Army has ground vehicles that combine advances from across multiple mobility fields, it is unlikely to attain the lifesaving benefits, cost efficiencies, and environmental advantages promised by today’s emerging vehicle technologies. Integrating these innovations into a prototype ground vehicle will enable the U.S. Army to develop a new generation of lightweight, energy-efficient, easy-to-repair autonomous vehicles with integrated sensing that operate safely and stably under all load conditions.
The long-term goal of this work is to leverage the multi-disciplinary research expertise and infrastructure at MSU to build a prototype of an autonomous, all-terrain, lightweight vehicle platform that, for the first time, integrates the most promising new vehicle technologies from a range of fields. This presentation covers the six projects funded through the UM-ARC, which span five core areas: (i) Controls and Stability, (ii) Lightweight Batteries, (iii) Drivetrain and Power Electronics, (iv) RF Sensors and Integration, and (v) Advanced Manufacturing and Lightweight Vehicle Structure Design. The presentation will review progress to date and provide an overview of the synergies, successes, challenges, and path ahead.
Links to projects: 1.A107 | 1.A108 | 4.A109 | 4.A110 | 3.A111 | 3.A112
March 22, Friday, 11:00am-12:00pm eastern time
Mathematical Approaches for Learning From Gaming Data
PIs: Dr. Alex Gorodetsky, Dr. Shravan Veerapaneni (U. of Michigan)
project link
Abstract: We propose a computationally faster and more data-efficient approach to the video-to-action learning problem arising in imitation learning. Our aim is to rapidly learn a player's game-playing strategy, or a soldier's behavior in gamified environments, from video data paired with actions. In previous work, we demonstrated the effectiveness of incrementally trained tensor networks for extracting latent representations from data, which can then be used as input for behavioral cloning. In particular, the incrementally trained tensor network yields almost an order-of-magnitude reduction in computational requirements compared to neural-network approaches such as autoencoders and variational autoencoders. However, existing tensor-network approaches can fall short on first-person-perspective games with more complex three-dimensional visuals, such as Minecraft. To address these limitations, we developed an incremental Hierarchical Tucker decomposition algorithm, the first of its kind in the literature. We find that this tensor-network method achieves shorter compression times than our previous work using an incremental tensor train, allowing us to tackle more complicated games. We present preliminary results of this approach, including visual reconstructions and behavioral cloning with the extracted features on the MineRL competition dataset.
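For readers unfamiliar with the video-to-action pipeline described above, the sketch below shows the two stages in miniature: frames are compressed into low-dimensional latent features, and a policy is cloned from the feature-action pairs. It is only an illustration; a plain truncated SVD and a 1-nearest-neighbor policy stand in for the incremental Hierarchical Tucker compression and the behavioral-cloning model used in the project, and the data, frame size, and rank are made-up placeholders.

```python
# Illustrative video-to-action sketch: latent feature extraction + behavioral cloning.
# NOTE: a plain truncated SVD stands in for the incremental Hierarchical Tucker /
# tensor-train compression described in the abstract; shapes, rank, and the
# 1-nearest-neighbor policy are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)

# Fake demonstration data: T grayscale frames (H x W) paired with discrete actions.
T, H, W, n_actions = 500, 32, 32, 4
frames = rng.random((T, H, W)).astype(np.float32)
actions = rng.integers(0, n_actions, size=T)

# 1) Latent extraction: flatten frames and keep the top-r right singular directions.
X = frames.reshape(T, H * W)                  # (T, H*W)
X_mean = X.mean(axis=0)
U, S, Vt = np.linalg.svd(X - X_mean, full_matrices=False)
r = 16                                        # compression rank (assumption)
basis = Vt[:r]                                # (r, H*W) latent basis
latents = (X - X_mean) @ basis.T              # (T, r) compressed features

# 2) Behavioral cloning: a minimal 1-nearest-neighbor policy over latent features.
def policy(frame: np.ndarray) -> int:
    z = (frame.reshape(-1) - X_mean) @ basis.T
    nearest = np.argmin(np.linalg.norm(latents - z, axis=1))
    return int(actions[nearest])

# Query the cloned policy on a new observation.
print("predicted action:", policy(rng.random((H, W)).astype(np.float32)))
```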
Virtual Experimentation for Soldier Evaluation of Autonomous and Non-Autonomous Technologies Using Multi-User Immersive Gaming Environments
PI: Dr. Wing-Yue Geoffrey Louie (Oakland U.)
project link
Abstract: Traditional development of ground vehicle technologies follows a sequential design-develop-test waterfall model. This approach is costly for autonomous ground vehicles and robots because they are complex systems with many interacting sensors, actuators, algorithms, controls, and interfaces in addition to traditional automotive components. That complexity also means a complete physical prototype takes a significant amount of time to develop, so end-users are not introduced to one until late in a project’s lifecycle. If the physical prototype fails to deliver the intended capability, the technology design process may have to start all over, potentially doubling the development cost and time and delaying the intended benefit.
Virtual experimentation using immersive gaming environments has the potential to dramatically change this physical-prototype paradigm by introducing end-users to realistic virtual prototypes of technologies early in a project’s lifecycle. However, immersive gaming environments were originally designed with entertainment in mind, and consequently there are barriers to using them for virtual experimentation. Our research aims to address this gap, and we present preliminary results from two ongoing projects. The first focuses on rapidly prototyping crew-station interfaces in VR, AR, and XR to support manned-unmanned teaming. The second focuses on scaling post-interaction interviews by using social agents to interview end-users about their experiences with a novel technology.
April 5, Friday, 11:00am-12:00pm eastern time
Unsupervised Testing and Verification for Software Systems of Ground Autonomous Vehicles
PI: Dr. Nickolas Vlahopoulos (U. of Michigan)
project link
Abstract: The overall objective of this project is to develop a new unsupervised software-testing approach, where "unsupervised" means that minimal human effort is expected in defining test cases and expected outcomes. Work completed during the first seven months of this effort is presented. Two simulation harnesses were established at UM, both using ROS2 like the Robotics Technology Kernel (RTK); they are used to develop and demonstrate the new capabilities before transferring them to RTK. The public-domain Autoware Universe system for operating autonomous vehicles provides a comprehensive simulation harness comparable to RTK, and a second, much simpler harness was established to implement and demonstrate the new developments with a much faster turnaround. An interceptor code and a code for monitoring error-metric evaluation were developed and placed within the software system under test. The former intercepts the data flow, alters it, and publishes the altered data; the monitoring code identifies successful termination, inactivity, excessive run time, and unexpected termination, and its output will be used to develop an error metric. A capability to execute multiple runs of the simulation harness across multiple generations has also been developed.
Automated Co-Design of Vehicles and their Teaming Operations for Optimal Off-Road Performance
PI: Dr. Bogdan Epureanu (U. of Michigan)
project link
Abstract: The engineering process of AI-powered autonomy involves conceptualization, designer input, intelligence empowerment, and performance experimentation. This process becomes especially challenging when developing multi-agent systems such as human-autonomy teams, due to complex interactions between agents and operation environments. Existing approaches either solve for teaming strategies given pre-defined physical attributes of vehicles or find attributes that satisfy pre-defined strategies. In this work, we propose a more general approach that simultaneously co-designs team physical attributes and teaming strategies. The approach consists of two methods: an iterative heuristic-based process with designer feedback, and a method that co-evolves design attributes and team behavior with genetic algorithms. We demonstrate the co-design methods on a multi-agent logistic operation considering vehicle delivery capability, route traversability, and constraints on physical attributes. This effort aims to automate a co-design process that cost-effectively leverages available resources and assets for maximum teaming effectiveness and adapts quickly to the expectations of, and changes in, operation environments.
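Two hedged sketches follow, one per abstract above. First, for the Vlahopoulos project, a minimal ROS2 (rclpy) node illustrating the interceptor pattern: subscribe to a data stream, alter it, and republish it for the system under test. The topic names, message type, and perturbation are hypothetical placeholders, not the project's code.

```python
# Minimal ROS2 (rclpy) sketch of an "interceptor" node: subscribe to an upstream
# signal, alter it, and republish it so the downstream stack consumes the
# perturbed data. Topic names, message type, and the perturbation are hypothetical.
import random

import rclpy
from rclpy.node import Node
from std_msgs.msg import Float64


class Interceptor(Node):
    def __init__(self):
        super().__init__('interceptor')
        # Intercept an upstream signal (hypothetical topic names).
        self.sub = self.create_subscription(Float64, '/sensor/raw', self.on_msg, 10)
        self.pub = self.create_publisher(Float64, '/sensor/perturbed', 10)

    def on_msg(self, msg: Float64) -> None:
        out = Float64()
        out.data = msg.data + random.gauss(0.0, 0.1)  # inject a small perturbation
        self.pub.publish(out)


def main():
    rclpy.init()
    rclpy.spin(Interceptor())


if __name__ == '__main__':
    main()
```

Second, for the Epureanu project, a toy co-design loop in the spirit of the genetic-algorithm method: each genome couples a physical attribute (payload capacity) with a teaming strategy (route assignment), and both are evolved against a shared fitness. The fitness model, constraints, and GA settings are illustrative assumptions, and crossover is omitted for brevity.

```python
# Toy co-design sketch: evolve a vehicle attribute and a team routing strategy together.
# Fitness model, constraint ranges, and GA settings are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(1)
n_routes, pop_size, generations = 3, 40, 60
route_demand = np.array([4.0, 7.0, 2.0])      # deliveries required per route
route_difficulty = np.array([1.0, 1.5, 0.7])  # traversability penalty per route

def fitness(capacity: float, assignment: np.ndarray) -> float:
    # Deliveries completed per route, limited by capacity, minus a cost that grows
    # with capacity (heavier vehicle) and with route difficulty.
    delivered = np.minimum(route_demand, capacity * np.bincount(assignment, minlength=n_routes))
    cost = 0.3 * capacity + 0.1 * route_difficulty[assignment].sum()
    return delivered.sum() - cost

# Population: capacity in [1, 10] plus a route assignment for a 5-vehicle team.
caps = rng.uniform(1.0, 10.0, size=pop_size)
assigns = rng.integers(0, n_routes, size=(pop_size, 5))

for _ in range(generations):
    scores = np.array([fitness(c, a) for c, a in zip(caps, assigns)])
    # Tournament selection (size 2).
    idx = np.array([max(rng.choice(pop_size, 2, replace=False), key=lambda i: scores[i])
                    for _ in range(pop_size)])
    caps, assigns = caps[idx].copy(), assigns[idx].copy()
    # Mutate both the physical attribute and the strategy.
    caps = np.clip(caps + rng.normal(0.0, 0.5, pop_size), 1.0, 10.0)
    flip = rng.random(assigns.shape) < 0.1
    assigns[flip] = rng.integers(0, n_routes, size=flip.sum())

best = int(np.argmax([fitness(c, a) for c, a in zip(caps, assigns)]))
print("best capacity:", round(caps[best], 2), "assignment:", assigns[best])
```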
April 19, Friday, 11:00am-12:00pm eastern time
Synthetically Trained Ultrasound Perception System Tested in Physical Environments
PIs: Dr. Bogdan-Ioan Popa, Dr. Bogdan Epureanu (U. of Michigan)
project link
Abstract: Biosonar research has demonstrated that ultrasound perception is effective for understanding and navigating complex environments. Echolocating animals can learn the acoustic representation of objects sequentially as the objects are encountered, and they can do so quickly and accurately using relatively few "training" echoes. Matching this performance in engineered systems is key to advancing autonomous and assisted navigation research. However, previous work on artificial perception violated these principles by relying almost exclusively on single, very deep convolutional neural networks (CNNs) that require huge training sets. In contrast, this project showed that multiple specialized CNNs (SCNNs) acting in parallel maintain the vital characteristics of biosonar noted above. Here, we further show that the synthetically trained SCNNs can effectively process echoes measured in physical environments. This property is due to careful preparation of the synthetic training data sets, which makes the trained SCNNs robust to the unavoidably high tolerances of the sound source, unpredictable noise, and changing absorption characteristics of air. We also show how analysis of the trained SCNNs provides salient echo features and their relative importance in the object-recognition process.
Adaptive Structures with Embedded Autonomy for Advancing Ground Vehicles
PI: Dr. Kon-Well Wang (U. of Michigan)
project link
Abstract: The rapid advances in autonomous systems, such as automated vehicles, demand that future adaptive structural and material systems become even more intelligent. This need inspired us to move beyond the conventional platform, which relies mainly on add-on digital computers to achieve intelligence, toward mechano-intelligence, which embodies intelligence more directly in the mechanical domain. Although studies have attempted to achieve mechano-intelligence, there is no systematic foundation for constructing and integrating its elements, namely perception, learning, and decision-making, with sensory input and execution output for engineering functions. In this study, we lay down such a foundation by harnessing the physical-computation concept, advancing from mere computing to multifunctional mechano-intelligence. As exemplar testbeds, we constructed mechanically intelligent metastructures that achieve wave adaptation via physical computing and deliver multiple engineering functions, ranging from noise and vibration control and wave logic gates to phononic communication. This research will pave the path to autonomous structures that surpass the state of the art, with lower power consumption, more direct interactions, and much better survivability in harsh environments and under cyberattack.
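As a structural illustration of the Popa and Epureanu abstract's central idea, the sketch below builds several small specialized 1-D CNNs that score an echo in parallel and lets the most confident detector decide. The layer sizes, echo length, and class count are assumptions; this is not the project's trained architecture.

```python
# Structural sketch of "multiple specialized CNNs acting in parallel": one small
# 1-D CNN per object class scores an echo waveform, and the highest score wins.
# Network sizes, echo length, and class count are illustrative assumptions.
import torch
import torch.nn as nn

ECHO_LEN, N_CLASSES = 256, 5

class SpecializedCNN(nn.Module):
    """A small detector that (in the real system) would recognize one object class."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=9, padding=4), nn.ReLU(),
            nn.Conv1d(8, 8, kernel_size=9, padding=4), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(8, 1),
        )

    def forward(self, echo: torch.Tensor) -> torch.Tensor:
        # echo: (batch, 1, ECHO_LEN) -> (batch,) score for this detector's class
        return self.net(echo).squeeze(-1)

detectors = nn.ModuleList(SpecializedCNN() for _ in range(N_CLASSES))

def classify(echo: torch.Tensor) -> torch.Tensor:
    # Run all specialized detectors in parallel and pick the most confident one.
    scores = torch.stack([d(echo) for d in detectors], dim=-1)  # (batch, N_CLASSES)
    return scores.argmax(dim=-1)

# Example: classify a batch of two (untrained, random) synthetic echoes.
print(classify(torch.randn(2, 1, ECHO_LEN)))
```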