Automotive Research Center

Technical Talk Abstracts

Day 1: Tuesday June 21

Session 1.A

Project 1.33
Trust-based Symbolic Motion and Task Planning for Multi-robot Bounding Overwatch
Yue "Sophie" Wang (PI), Huanfei Zheng (Clemson U.); Jonathan Smereka, Dariusz Mikulski (GVSC)

        In a human multi-robot collaborative bounding overwatch task, the autonomous robots provide overwatch protection to the human-operated robots and alternate their advance movements under potential adversaries and risks in off-road environments. Therefore, the autonomous robots must explore trustworthy paths with overwatch points for the human-operated robots. To achieve this objective, we first develop a time-series computational model to capture the human’s dynamic trust evolution with respect to environment-related attributes, such as robot traversability and visibility. A data-efficient Bayesian optimization-based interactive experiment design is developed to learn the computational trust model. We further develop a framework for provably correct symbolic motion and task planning of multi-robot systems (MRS) to perform complex bounding overwatch tasks constrained by temporal logic specifications. A Dijkstra search strategy identifies the most trustworthy task and motion plan. Integrating the trust model and the temporal logic formulae can unambiguously encode human intent into robot motion behaviors. Robot simulations with humans-in-the-loop are implemented in ROS Gazebo to demonstrate the effectiveness of the proposed framework.
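        The abstract does not spell out the search formulation; as a minimal sketch of the general idea (assuming, as a simplification on our part, that each edge of a waypoint graph carries a scalar trust value in (0, 1] and that trust composes multiplicatively along a path), a Dijkstra search over negative-log-trust edge weights returns the most trustworthy route:

```python
import heapq
import math

def most_trustworthy_path(graph, start, goal):
    """Dijkstra over -log(trust) edge weights, so that minimizing path cost
    maximizes the product of edge trust values along the path.

    graph: dict mapping node -> list of (neighbor, trust) with trust in (0, 1].
    """
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        cost, node = heapq.heappop(pq)
        if node == goal:
            break
        if cost > dist.get(node, math.inf):
            continue
        for nbr, trust in graph.get(node, []):
            new_cost = cost - math.log(trust)
            if new_cost < dist.get(nbr, math.inf):
                dist[nbr] = new_cost
                prev[nbr] = node
                heapq.heappush(pq, (new_cost, nbr))
    path, node = [goal], goal          # reconstruct the best route
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1], math.exp(-dist[goal])

# Toy waypoint graph: two candidate routes from A to the overwatch point D.
graph = {"A": [("B", 0.9), ("C", 0.6)], "B": [("D", 0.8)], "C": [("D", 0.95)]}
print(most_trustworthy_path(graph, "A", "D"))   # (['A', 'B', 'D'], ~0.72)
```

        In the project itself the search runs over a product of the motion model and the temporal-logic specification, which this toy graph does not capture.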

Project 2.14
Dynamic Task Allocation and Understanding of Situation Awareness Under Different Levels of Autonomy in Closed-Hatch Military Vehicles
Cindy L. Bethel (PI), Daniel W. Carruth (Co-PI), Jessie E. Cossitt, Viraj R. Patel (Grad. Students, Mississippi State U.), Victor Paul (GVSC)

        With current autonomous vehicle capabilities, it is necessary for operators to remain engaged and monitor the system, intervening when necessary. This creates a need to better understand the interactions between operators and autonomous vehicle control systems in order to provide the best-case scenario for utilization of autonomous capabilities. Such an understanding could lead to the development of a system that dynamically allocates tasks in military missions to reduce crew sizes and thus labor costs. The goal of this research so far has been to determine how increasing levels of vehicle autonomy affect the operator’s situational awareness, cognitive load, and performance in responding to road events and to other mission-related tasks presented at both constant and varying rates. In the current research phase, the established knowledge of the interactions among these factors will be used to determine the best way to allocate tasks to crew members in missions where crew size has been reduced through the use of autonomous vehicles.

Project 2.13
Optimal Distribution of Tasks in Human-Autonomy Teams
Haochen Wu, Charne Folks, A. Emrah Bayrak, Bogdan I. Epureanu (PI, U-M), Victor Paul, John Brabbs, Jonathon Smereka, Jillyn Alban (GVSC), Mert Egilmez

        In operations of multi-agent teams ranging from homogeneous robot swarms to heterogeneous human-autonomy teams, team members may experience computational or cognitive overload, and operational effectiveness can plummet when facing unforeseen events. While efficiency of operation is the primary objective in multi-agent task allocation problems, it is essential that the team is intelligently designed to manage task loads with limited resources while operational strategies are learned efficiently. We present a design-while-learn framework for multi-agent teams to select and adjust team members and learn task allocation with consideration of load management through decentralized deep reinforcement learning. The load management encourages idling behaviors, avoids excessive resource waste, and identifies capability needs, allowing the removal and addition of team members with various capacity sets. We illustrate the effect of load management on team performance, explore agent behaviors in an example disaster relief scenario, and demonstrate the design-while-learn framework without compromising operation effectiveness.
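        As a purely illustrative sketch of how load management can be folded into the learning signal (the reward terms and weights below are hypothetical, not the project's), a shaped per-agent reward might penalize overload while allowing useful idling:

```python
def shaped_reward(task_reward, load, capacity, idled, w_overload=1.0, w_idle=0.1):
    """Hypothetical per-agent reward shaping for load-managed task allocation.

    task_reward : reward earned from tasks completed this step
    load        : task load currently assigned to the agent
    capacity    : the agent's resource capacity
    idled       : True if the agent chose the idle action this step
    """
    overload_penalty = w_overload * max(0.0, load - capacity)   # discourage overload
    idle_bonus = w_idle if idled and load == 0 else 0.0         # allow useful idling
    return task_reward - overload_penalty + idle_bonus
```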

Project 2.12
Cognitive Modeling of Human Operator Behavior during Interaction with Autonomous Systems
Chen Li (U-M, GSRA), Michael Cole (GVSC), Paramsothy Jayakumar (GVSC), James Poplawski (Arriver), Tulga Ersal (PI, U-M)

        The haptic shared control (HSC) paradigm promises a better shared control experience between a human driver and autonomy through continuous negotiation of control authority between the two agents. Understanding how a human operator interacts with autonomy in this paradigm can accelerate the development of HSC technologies and improve the efficiency of human-autonomy negotiation. However, such a fundamental understanding and mathematical models that capture it are not available in the literature.
        This project fills this gap. We analyze the performance and behavior of humans as they collaborate with autonomy to perform a simulated path tracking task under various levels of controlled disagreement between the two agents. From experimental observations, we formulate hypotheses about the principles governing the human’s haptic interaction with autonomy, and formalize these hypotheses in a mathematical model built on the cognitive framework ACT-R. The model is parameterized without fitting to experimental data to achieve a predictive solution. In this talk, we show that the model can successfully predict steering performance under non-critical disagreement. This is the first model to offer such predictive capability and is an important step toward fully simulation-based development and evaluation of HSC technologies.

Project 2.16
Situated Dialogue for Handling Unexpected Situations in Autonomous Driving Agents
Ziqiao Ma, Ben VanDerPloeg, Cristian-Paul Bara, Yidong Huang, Eui-In Kim (U-M), Felix Gervits, Matthew Marge (ARL), Joyce Chai (PI, U-M)

        Real-world autonomous driving agents navigate highly dynamic environments that are prone to unexpected situations. As humans are often the most reliable source of help in these situations, it is important for the vehicle to be able to communicate with humans through situated dialogue while navigating in a continuous and dynamic environment. To this end, we have developed a high-fidelity simulated environment - Autonomous Driving Vehicle In Controllable Environment (ADVICE) - that can generate unexpected events on the fly to support empirical studies on situated communication with autonomous driving agents for exception handling. Using ADVICE, we are creating a fine-grained vision-and-language navigation benchmark - REplanning Autonomous Driving Agent from Planned Trajectory (READAPT) - based on sessions of game play with human subjects. In this talk, we will give a brief introduction to ADVICE and READAPT and present some initial results on this benchmark.

Project 2.15
In-the-wild Question Answering: Toward Natural Human-Autonomy Interaction
Santiago Castro, Naihao Deng, Frank Huang, Mihai Burzo (Co-PI), Rada Mihalcea (PI) (U-M); Matt Castanier (DEVCOM GVSC); Glenn Taylor (Soar Tech)

        Situational awareness remains a major goal for both humans and machines as they interact with complex and dynamic environments. Awareness of unfolding situations allows for rapid reactions to events that take place in the surrounding environment, and enables more informed and thus better decision making. In this project, we address the task of in-the-wild multimodal question answering, in which an autonomous system visually charts a territory and is able to answer questions about the entities and events it has observed. We introduce WildQA, a video question answering (Video QA) benchmark of "in the wild" videos with corresponding questions and answers. In addition to answering questions, we also introduce the task of retrieving relevant parts of the video given a question (video evidence selection). We describe the process of compiling this benchmark, with contributions from both expert and non-expert users, and we test a wide range of baseline models on the proposed dataset. Our initial results show that WildQA poses new challenges to the research community while addressing an important problem for situational awareness.


Session 1.B

Project 1.31
Comparison of Simulation and Physical Testing of Autonomous Ground Vehicles
D. Carruth, C. Goodin, L. Dabbiru, M. Moore, N. Scherrer, C. Hudson, L. Cagle (Mississippi State U.), and P. Jayakumar (GVSC)

        Many factors, including hardware, software, systems integration, and environmental conditions, may cause failures of autonomous ground vehicles (AGVs). Diagnosing these failures is complicated by the fact that they may be infrequent and difficult to reproduce. We will present the results of a three-phase project to develop a simulation-based framework for the systematic evaluation of AGV software with metrics at both the system and subsystem levels. Within the framework, perception and planning algorithms were tested within a modular AGV architecture to explore the sensitivity of algorithm performance to variations in sensor quality, environmental conditions, and operational requirements. Testing was performed using a physics-based simulator (the MSU Autonomous Vehicle Simulator, or MAVS) and compared to physical testing for a subset of conditions.

Project 1.35
Telerobotic Camera View-Frame Placement and Distributionally Risk-Receptive Network Interdiction Problems
Manish Bansal (PI), Sunghoon Park, Sumin Kang (Virginia Tech), Jonathon Smereka, Sam Kassoumeh (GVSC), Scott Corey (SIS)

        The objective of this project is to develop novel algorithms for the operation and resiliency of autonomous telerobotic surveillance and reconnaissance systems, i.e., unmanned ground vehicles (UGVs) equipped with cameras, by solving a set of stochastic combinatorial optimization problems. More specifically, our goals are as follows. The telerobotic camera system requires a large amount of data processing and storage, even though the information (videos or images) provided by the cameras may overlap. We develop computationally efficient solution approaches to effectively manage this information so that it can be utilized to accomplish the search mission.
        We also propose novel mathematical frameworks to identify vulnerabilities in a logistics network for UGVs against attacks by an enemy with varying levels of risk appetite (risk-receptive to risk-averse). Such analyses are critical for making strategic long-term planning decisions. We introduce distributionally risk-receptive and risk-averse network interdiction problems in which a leader maximizes a follower’s minimal expected objective value under the best-case and worst-case probability distribution, respectively, belonging to a given set of distributions. We present finitely convergent exact and approximation algorithms for these problems, along with computational results.
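        For the first goal, managing overlapping camera information, one simple illustration (our abstraction, not the project's exact model) treats each candidate view frame as a set of covered regions of interest and selects a small covering subset with a greedy set-cover heuristic:

```python
def greedy_view_selection(frames):
    """Greedy set-cover heuristic for camera view-frame selection: keep a small
    set of frames whose fields of view jointly cover all regions of interest,
    discarding redundant (overlapping) views.

    frames: dict mapping frame id -> set of covered region ids.
    """
    uncovered = set().union(*frames.values())
    selected = []
    while uncovered:
        best = max(frames, key=lambda f: len(frames[f] & uncovered))
        if not frames[best] & uncovered:
            break
        selected.append(best)
        uncovered -= frames[best]
    return selected

frames = {"cam1": {1, 2, 3}, "cam2": {3, 4}, "cam3": {4, 5, 6}, "cam4": {2, 3, 4}}
print(greedy_view_selection(frames))   # ['cam1', 'cam3'] covers regions 1-6
```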

Project 1.36
Semantic Mapping in Dynamic Off-Road Environments
Joey Wilson, Jingyu Song, Yuewei Fu, Arthur Zhang (U-M), Andrew Capodieci, Paramsothy Jayakumar (GVSC), Kira Barton (Co-PI), and Maani Ghaffari (PI)

        While semantic mapping algorithms have achieved high levels of performance in static environments, most mapping algorithms fail in the presence of dynamic objects. Under a static assumption, dynamic objects leave behind traces in the map, which causes issues for downstream applications. In this talk, we present a method for semantic mapping in dynamic environments with closed-form Bayesian inference. Our method, Dynamic BKI <https://arxiv.org/abs/2108.03180>, leverages semantic segmentation and scene flow neural networks to infer a 3D semantic occupancy map. Next, we introduce a data set for dynamic semantic mapping formed from randomly sampled views of the world. We establish semantic scene completion baselines and construct a benchmark real-time dense local semantic mapping algorithm, MotionSC <https://arxiv.org/abs/2203.07060>. Our network shows that the proposed data set can quantify and supervise accurate scene completion in the presence of dynamic objects, which can lead to the development of improved dynamic mapping algorithms. Finally, we present avenues for future work by combining these methods to achieve higher levels of scene understanding and transferring the results to real vehicles.
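        For readers unfamiliar with Bayesian kernel inference, a simplified single-voxel update looks roughly like the following (a sketch with an assumed linear-decay sparse kernel; the actual Dynamic BKI formulation, including its scene-flow handling of moving points, is given in the linked paper):

```python
import numpy as np

def bki_update(alpha, voxel_center, points, labels, num_classes, ell=0.5):
    """One closed-form Bayesian kernel inference update for a single voxel.

    alpha        : (num_classes,) Dirichlet concentration parameters of the voxel
    voxel_center : (3,) voxel centroid
    points       : (N, 3) measured 3D points
    labels       : (N,) predicted semantic class index per point
    ell          : kernel support radius (assumed linear-decay sparse kernel)
    """
    d = np.linalg.norm(points - voxel_center, axis=1)
    k = np.clip(1.0 - d / ell, 0.0, None)         # weight decays to 0 beyond ell
    for c in range(num_classes):
        alpha[c] += k[labels == c].sum()           # accumulate kernel-weighted counts
    return alpha

# The voxel's expected class probabilities follow from the Dirichlet posterior:
# p(class = c) = alpha[c] / alpha.sum()
```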

Project 1.37, PI: Popa
Ultrasound-based perception in complex scenes using specialized convolutional neural networks
HyungSuk Kwon (PhD student), Paul Mohan (ZF), Paramsothy Jayakumar (GVSC), Bogdan Epureanu (Co-PI), Bogdan-Ioan Popa (PI, U-M)

        Echolocating animals such as bats and dolphins demonstrate that ultrasound can be used effectively to classify and locate objects in complex environments where optics-based systems are ineffective, e.g., in fog, rain, or snow. How to replicate the performance of biosonar in artificial autonomous navigation systems, however, is still an open question. In this talk, we present a method to classify and locate surrounding objects using specialized convolutional neural networks (SCNNs) that process ultrasound echoes coming from the environment. Our method draws inspiration from the biological world in that it uses a battery of neural networks specialized to perform very specific pattern recognition tasks, such as identifying whether a specific object A exists in the scene or finding the location of object B given that B exists in the scene. Unlike competing methods that rely on a single, large, global CNN, our SCNNs require relatively little data to train, are robust to noise in the input, and work well in cluttered scenes where objects are in close proximity. Our method is also modular: recognizing a previously unknown object requires training a new SCNN without touching the previously trained SCNNs.
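        A rough sketch of one such specialized network and the modular battery structure (the architecture, layer sizes, and input shapes below are illustrative assumptions, not the networks used in the project):

```python
import torch
import torch.nn as nn

class EchoSCNN(nn.Module):
    """One specialized network answering a single question,
    e.g. 'is object A present in this echo?' (binary output)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, stride=2), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(32, 1)

    def forward(self, echo):                   # echo: (batch, 1, num_samples)
        z = self.features(echo).squeeze(-1)
        return torch.sigmoid(self.head(z))

# The modular "battery": one small network per question, so recognizing a new
# object only requires training one more SCNN.
battery = {"object_A_present": EchoSCNN(), "object_B_present": EchoSCNN()}
echo = torch.randn(4, 1, 2048)                 # synthetic raw echo waveforms
answers = {name: net(echo) for name, net in battery.items()}
```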

Project 1.30
Novel Data-Driven Algorithms for Autonomous Vehicle Path Planning Problems During Planning and Evaluation Stages
Saravanan Venkatachalam (PI), Venkata Sirimuvva Chirala (PhD Student, Wayne State University), Jonathon Smereka, Sam Kassoumeh (GVSC)

        This research focuses on solving offline high-level path planning problems for autonomous vehicles. We consider deterministic and stochastic variants of a fuel-constrained mission planning problem with refueling stations and uncertainty in the availability of unmanned vehicles (UVs). Furthermore, we extend the problem to a multi-objective, multiple-vehicle routing problem in which teams of manned vehicles (MVs) and UVs are deployed in a leader-follower framework while considering human-robot interactions (HRI). We present exact decomposition algorithms using outer and inner approximations, and variable neighborhood search heuristics for larger instances. Further, we conduct extensive computational experiments and evaluate the robustness of the developed models during the evaluation stage. Data-driven simulation studies are performed using the Robot Operating System (ROS) framework to corroborate the approach, and the modules are available for use via a cloud platform.

Project 5.20
Dynamic Teaming of Autonomous Vehicles to Improve Operational Effectiveness of Vehicle Fleets
Aabhaas Vaish (U-M), Xingyu Li (Ford), Chenyu Yi (Mercedes-Benz), Jonathon Smereka (GVSC), Bogdan I. Epureanu (U-M)

        With the introduction of autonomy in the operation of vehicle fleets, it has become increasingly important to develop robust agent behaviors for the fleet, especially in a rapidly evolving and uncertain environment. In addition, since the operation of an autonomous vehicle fleet involves communication between the vehicles, it is imperative to develop efficient communication strategies that optimize the distribution of information between agents as well as the flow of information within the fleet. In this research, we developed a neural network-based inference model that uses spatiotemporal information about the adversary and the environment to predict future positions of the adversary, allowing the defending agents to optimize their future behavior accordingly. Furthermore, we demonstrate that using intelligent communication strategies in conjunction with this optimized individual behavior allows agents to reason with a global context, which increases collaboration within the vehicle fleet and leads to an overall increase in the operational effectiveness of the fleet.
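        A toy stand-in for the position-inference step (the project uses a learned neural model; the constant-velocity extrapolation below merely illustrates the input/output structure):

```python
import numpy as np

def predict_positions(track, horizon=5, dt=1.0):
    """Extrapolate future adversary positions from its recent track using a
    constant-velocity assumption.

    track: (T, 2) array of observed (x, y) positions sampled every dt seconds.
    """
    track = np.asarray(track, dtype=float)
    velocity = (track[-1] - track[-2]) / dt            # latest displacement rate
    steps = np.arange(1, horizon + 1)[:, None]
    return track[-1] + steps * velocity * dt

print(predict_positions([[0, 0], [1, 0.5], [2, 1.0]], horizon=3))
```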


Day 2: Wednesday June 22

Session 2.A

Project 3.18
Materials design of polycarbonates at the atomistic scale with machine learning
Christopher Barrett (PI), Doyl Dickel (Co-PI), Mashroor Nitol (graduate student, Mississippi State U.)

        Neural networks have proven to be an incredibly useful tool for the design of interatomic forcefield models. Using first-principles results calculated with density functional theory, large training databases for different configurations of organic molecules can be used to generate fast and highly optimized potentials. However, as is often the case with machine learning, extrapolation beyond the existing dataset can be difficult. Physics-informed neural networks (PINNs) mitigate some of this problem by building known physics relations directly into the neural network formalism. We have implemented these ideas in two ways: first, by using known physics in the construction of the structural fingerprint, which reduces the size and complexity of the network; and second, by introducing an equation of state (EOS) that reproduces the known universal binding energy relationship for individual atomic bonds. The neural network result is then treated as a perturbation from this EOS which can be tightly bounded, reducing errors from the training set.
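        A schematic of the EOS-plus-bounded-perturbation idea for a single bond (the Rose-form universal binding energy relation with placeholder parameters; the bound and the stand-in network callable are our illustrative assumptions, not the project's implementation):

```python
import numpy as np

def uber_bond_energy(r, e0=4.0, r0=2.5, length=0.5):
    """Universal binding energy relation (Rose form) for a single bond;
    e0, r0, and length are placeholder parameters."""
    a = (r - r0) / length
    return -e0 * (1.0 + a) * np.exp(-a)

def pinn_bond_energy(r, nn_correction, bound=0.2):
    """Physics-informed potential: EOS baseline plus a tightly bounded
    neural-network perturbation. `nn_correction` is any callable standing
    in for the trained network; tanh keeps the correction within +/- bound."""
    return uber_bond_energy(r) + bound * np.tanh(nn_correction(r))
```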

Project 2.A72/74
Assessing the Quality of Driving On and Off-Road Vehicles: Measures and Statistics of Driving Performance
Paul Green (PI, U-M)

        Advances in autonomy, artificial intelligence, and machine learning will change the battlefield and what Soldiers do. Accordingly, the Army has a major initiative to field Optionally Manned Fighting Vehicles/Robotic Combat Vehicles (OMFV/RCV). Supporting that initiative is the GVSC Crew Optimization & Augmentation Technologies (COAT) program. As part of COAT, the University has helped design, conduct, and analyze human factors experiments (two on the GVSC CS/TBMS simulator, one at Camp Grayling), partnering with GVSC, DSC Corp, Texas A&M, and UCF. The University has also designed benchmark driving tasks, used in two experiments.
        To support consistent reporting of research results and allow studies to be compared, a project is currently under way to develop a military standard that defines measures and statistics of driving performance (e.g., lane departure, gap, time to collision) and provides guidance for their selection and application, including representative data. This standard builds upon SAE Recommended Practice J2944 (171 pages), expanding the context to include (1) off-road, tracked, and other vehicle types, (2) controls other than steering wheels, accelerator pedals, and brake pedals, (3) off-road driving and formation movements, (4) performance data published since 2016, and (5) other information.
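        For illustration, two of the measures named above have simple, commonly used operational definitions (J2944 catalogs several variants; the forms below are generic textbook versions, not the standard's wording, and the lane/vehicle widths are placeholder values):

```python
def time_to_collision(gap_m, follower_speed_mps, lead_speed_mps):
    """Time to collision (s): gap divided by closing speed; returned as
    infinity when the follower is not closing on the lead vehicle."""
    closing = follower_speed_mps - lead_speed_mps
    return gap_m / closing if closing > 0 else float("inf")

def lane_departure(lateral_offset_m, lane_width_m=3.6, vehicle_width_m=2.0):
    """True if any part of the vehicle crosses the lane boundary, given the
    lateral offset of the vehicle center from the lane center."""
    return abs(lateral_offset_m) + vehicle_width_m / 2 > lane_width_m / 2

print(time_to_collision(gap_m=30.0, follower_speed_mps=20.0, lead_speed_mps=15.0))  # 6.0 s
```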

Project 4.A87
Energy Efficiency Optimization and Control of a Fully Electric Off-Road Vehicle with Individual Wheel Drives
Masood Ghasemi, Vladimir Vantsevich, Lee Moradi, Jesse Paldan, Maddie Maddela (UAB), David Gorsich, Paramsothy Jayakumar (GVSC), Mostafa Salama (GM), Tom Canada (SCS)

        The advent of electric powertrains, and in particular of in-wheel motors (IWMs), has enhanced the potential for mobility performance, maneuverability, and energy efficiency of vehicles. The underlying enablers are more agile dynamic characteristics, more precise torque modulation and control of all individual wheels, and overactuated system dynamics. An electric vehicle powered by permanent magnet synchronous motors (PMSMs) includes dynamic redundancies in all individual powertrains as well as in chassis power distribution. Due to such redundancies, the vehicle benefits from enhanced systemic energy efficiency, a characteristic feature independent of the vehicle's path planning and trajectory-tracking control design. In this talk, the hierarchical vehicle control system design, with a focus on systemic energy efficiency optimization, is discussed. In particular, the control system features a model-free trajectory-tracking design based on sliding mode control techniques, an optimal energy-efficient force distribution among individual wheels, and optimal energy-efficient control of the IWM powertrains. The design is illustrated and analyzed through numerical simulations for a vehicle operating in off-road environments with stochastic and severe terrain conditions.
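        As a minimal illustration of energy-efficient force distribution among individual wheels (a quadratic power-loss proxy with made-up coefficients, not the project's optimization), the allocation admits a closed-form Lagrangian solution:

```python
import numpy as np

def distribute_wheel_forces(total_force, loss_coeffs):
    """Allocate a total longitudinal force demand across individual in-wheel
    motors by minimizing a quadratic power-loss proxy sum_i c_i * f_i**2
    subject to sum_i f_i = total_force (closed-form Lagrangian solution:
    f_i proportional to 1 / c_i).

    loss_coeffs: per-wheel loss coefficients c_i > 0, illustrative stand-ins
    for motor efficiency maps and terrain conditions.
    """
    c = np.asarray(loss_coeffs, dtype=float)
    weights = (1.0 / c) / np.sum(1.0 / c)
    return total_force * weights

print(distribute_wheel_forces(4000.0, [1.0, 1.0, 1.2, 1.5]))  # less force to lossier wheels
```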

Project 3.A88
Technical Approaches and Analysis of Vehicle Conceptual Design for Mobility and Autonomous Mobility
Jordan A. Whitson, Vladimir Vantsevich (PI, UAB), David Gorsich (GVSC), Lee Moradi (UAB), Brian Butrico, Oleg Sapunkov, Michael Letherwood (GVSC)

        A generated vehicle database draws on reputable market sources: OEM information, publications, and other cross-validated open sources. The database incorporates the Conventional Armed Forces treaty definitions of vehicles and an ever-growing collection of tracked and wheeled, manned and unmanned vehicles that expands as vehicle development continues in a fast-paced technology environment. The database provides a detailed collection of general vehicle parameters, technical parameters, and engineering evaluation metrics for assessing performance and capability. A comparative analysis of similar wheeled vehicles demonstrates the usefulness of the database, with work under way to inspect performance criteria to improve the conceptual development of future, next-generation vehicles. A Morphing Triangle Profile is introduced to provide representative insight into the traditionally coupled relationships among vehicle efficiency, mobility, and maneuverability. The Triangle defines a capability domain for vehicles and will provide the basis for improving vehicular maneuverability with regard to its effects on mobility. The improvement of maneuverability can come at a cost to mobility. Potential thresholds of maneuverability at given slippage and velocity restrictions are also presented for the development of a Maneuver Limit Corridor for autonomous vehicles in Situational Movements.

Project 1.A90
Mobility Prediction of Off-Road Ground Vehicles Using a Dynamic Ensemble of NARX Models
Dakota Barthlow, Zissimos P. Mourelatos (PI, Oakland U.), Zhen Hu (PI, U-M-Dearborn), David Gorsich, Amandeep Singh (GVSC)

        The objective of this research is to develop easy-to-use and scalable data-driven mobility prediction models for off-road autonomous ground vehicles, with quantified prediction uncertainty and assured prediction reliability. In the first year, we focus on building a data-driven mobility model using synthetically generated data. While data-driven models have great potential in mobility prediction, it is very challenging for a single model to accurately capture the complicated vehicle dynamics. With a focus on the vertical acceleration of an autonomous ground vehicle (AGV) under off-road conditions, we propose a surrogate modeling approach for mobility prediction using a dynamic ensemble of Nonlinear Autoregressive Network with Exogenous inputs (NARX) models over time. Synthetic mobility data are first collected using Project Chrono and then partitioned into different segments to represent different vehicle dynamic behaviors. Based on the partition, multiple data-driven NARX models are constructed with different numbers of lags. The NARX models are then assembled dynamically over time to predict AGV mobility under new conditions. A case study demonstrates the advantages of the proposed method over classical data-driven models for mobility prediction.
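        A bare-bones sketch of the NARX idea, using a linear least-squares surrogate in place of the project's neural NARX models (the lags, signals, and ensemble-blending rule here are illustrative assumptions):

```python
import numpy as np

def narx_features(y, u, lag):
    """Build NARX regressors: past outputs y and exogenous inputs u up to `lag`."""
    rows = [np.concatenate([y[t - lag:t], u[t - lag:t + 1]]) for t in range(lag, len(y))]
    return np.array(rows), y[lag:]

def fit_narx(y, u, lag):
    """Least-squares NARX surrogate predicting the next output from lagged y and u."""
    X, target = narx_features(y, u, lag)
    coeffs, *_ = np.linalg.lstsq(np.c_[X, np.ones(len(X))], target, rcond=None)
    return coeffs

# One model per lag (or per terrain segment); a dynamic ensemble would blend
# their one-step predictions, e.g. weighted by recent prediction error.
t = np.linspace(0, 20, 200)
models = {lag: fit_narx(y=np.sin(t), u=np.cos(t), lag=lag) for lag in (2, 4, 8)}
```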

Projects 5.19/5.A71
Adversarial Scene Generation for Virtual Validation and Testing of Off-Road Autonomous Vehicle Performance
Ram Vasudevan (PI), Bogdan Epureanu (Co-PI), Ted Sender (PhD Student, U-M), Mark Brudnak, John Brabbs (GVSC), and Reid Steiger (Ford)

        Advancements in machine learning have allowed recent perception/control algorithms for autonomous vehicles (AVs) to rely on deep neural networks (DNNs). However, the complexity of modern autonomy stacks and the susceptibility of DNNs to subtle input perturbations exacerbate the challenge of evaluating the robustness of an autonomy stack under many natural operating conditions. Numerous simulation tools and algorithms have been created to efficiently explore and improve the robustness of machine learning algorithms for on-road AVs; however, only a few methods have demonstrated usefulness for the off-road domain. Our work aims to help fill this void by proposing a scalable reinforcement learning-based approach for generating adversarial scenes for off-road AVs. By “adversarial” we mean that the scene is maximally problematic for the vehicle’s autonomy system to navigate while constrained to be realistic. We demonstrate that our proposed framework can generate pathological scenes against a custom autonomy system. We present studies that highlight various features of the automatically generated scenarios and their implications for AV design, testing, and validation. We will also discuss limitations of our framework and future work.
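        To make the objective concrete, the sketch below replaces the reinforcement learner with plain random search but keeps the same structure: propose a scene, score it by autonomy failure minus a realism penalty, and keep the worst-case scene (all names and the scoring callables are hypothetical, not the project's framework):

```python
import random

def adversarial_scene_search(evaluate_autonomy, realism_penalty, n_iters=200,
                             n_obstacles=10, bounds=(0.0, 100.0), seed=0):
    """Random-search stand-in for the RL scene generator: propose obstacle
    layouts and keep the one that is hardest for the autonomy stack while
    remaining realistic. Both callables are user-supplied."""
    rng = random.Random(seed)
    best_scene, best_score = None, float("-inf")
    for _ in range(n_iters):
        scene = [(rng.uniform(*bounds), rng.uniform(*bounds)) for _ in range(n_obstacles)]
        score = evaluate_autonomy(scene) - realism_penalty(scene)   # adversarial objective
        if score > best_score:
            best_scene, best_score = scene, score
    return best_scene, best_score

# Toy usage: "failure" grows when obstacles crowd the straight-line route y = 50.
scene, score = adversarial_scene_search(
    evaluate_autonomy=lambda s: sum(1.0 for x, y in s if abs(y - 50.0) < 5.0),
    realism_penalty=lambda s: 0.0)
```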

5.A71: Building Unreal Engine Scenes from Recorded Data
Mike Sasena, Nishant Singh (MathWorks)

        Many simulation tools provide means of creating scenes from map data such as HERE HD maps, OpenStreetMap, etc. However, when the region of interest is not properly mapped, for example, off-road areas of particular interest to the Army, this method of creating scenes does not work. Additionally, these maps do not contain information about many important environmental features such as buildings, foliage, and even temporarily placed objects like parked vehicles. Often, these features are also important and require manual placement in the scene by the user. In this talk, we will demonstrate a tool created to help automate the process of creating scenes from real-world sensor data. This tool uses lidar, GPS/IMU, and, optionally, camera data to automatically build a map of a location, semi-automatically segment features of interest, and automatically create a scene in the Unreal game engine.


Session 2.B

Project 1.A73
Continuous-Variable Quantum Approximate Optimization Algorithm: Application to Ground Vehicle Offroad Mobility
Yabin Zhang, James Stokes (Presenter), Shravan Veerapaneni (PI, U-M), Jeremy Mange, Paramsothy Jayakumar, David Gorsich (GVSC)

        An important obstacle inhibiting the growth of quantum algorithm research is that the recent rapid acceleration of quantum technologies has outpaced the development of hardware-agnostic, scalable simulation techniques, which are necessary for prototyping. Our work addresses this performance gap by accelerating continuous-variable quantum simulation techniques, leveraging the success of deep neural networks in solving high-dimensional learning tasks. Inspired by proposals for a continuous-variable quantum approximate optimization algorithm, we investigated the utility of continuous-variable neural network quantum states (CV-NQS) for performing continuous optimization, focusing on the ground state optimization of the classical antiferromagnetic rotor model. Numerical experiments conducted using variational Monte Carlo with CV-NQS indicate that although the non-local algorithm succeeds in finding ground states competitive with local gradient search methods, the proposal suffers from unfavorable scaling. We will describe several ongoing investigations that help alleviate the scaling difficulty. Looking ahead, acceleration of second-order solvers using variational quantum algorithms and the imposition of conic constraints resulting from frictional contacts will lead us to address the problem of ground vehicle off-road mobility simulations.

Project 1.A81
Tensor network approaches for fast and data efficient learning: applications to imitation learning from video data
Brian Chen, Doruk Aksoy (Presenter), Alex Gorodetsky (Co-PI), Shravan Veerapaneni (PI, U-M), David Gorsich (GVSC)

        We propose a computationally faster and more data-efficient approach to the video-to-action learning problem arising in imitation learning. Our aim is to rapidly learn a player's game-playing strategy, or soldier behavior in gamified environments, based on video data paired with actions. A common approach to this problem is to first learn a latent space for the video gameplay and then to build action predictors based on this latent space. Deep-learning approaches for identifying this latent space can suffer from large data and computational requirements for learning the latent mapping. This difficulty is particularly acute in imitation learning applications where only limited examples may exist. Instead, we propose a low-rank tensor approach for this video-to-latent-space mapping. We discuss recent advances we have made in employing tensor networks for learning generally, and we then apply them to the imitation learning problem. We show that the approach outperforms more standard deep learning-based autoencoder approaches in the low-data regime in both video reconstruction and action prediction accuracy. These benefits are achieved because the tensor-compression approach does not require the extensive architectural and hyperparameter tuning that is needed for deep-learning approaches.
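        As a low-data illustration of the latent-mapping step, the sketch below uses a plain matrix SVD in place of the project's tensor-network factorization (the frame sizes, rank, and downstream action predictor are assumptions on our part):

```python
import numpy as np

def fit_lowrank_latent(frames, rank=16):
    """Low-rank latent mapping for video frames via truncated SVD.

    frames: (T, H, W) grayscale video; returns an encoder and the spatial basis."""
    T, H, W = frames.shape
    X = frames.reshape(T, H * W)
    mean = X.mean(0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    basis = Vt[:rank]                                       # (rank, H*W)
    encode = lambda f: (f.reshape(-1, H * W) - mean) @ basis.T
    return encode, basis

# Latent codes would then feed a small action predictor trained on the paired
# video/action data.
frames = np.random.rand(50, 32, 32)
encode, basis = fit_lowrank_latent(frames, rank=8)
print(encode(frames[:3]).shape)                             # (3, 8)
```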

Project 1.A75
Deep Reinforcement Learning Approaches to CPS Vehicle Deployments
Venkat N. Krovi (PI), Melissa Smith, Umesh Vaidya, Phanindra Tallapragada, Feng Luo, Rahul Rai (Clemson U.), Denise Rizzo, David Gorsich, Jon Smereka, Mark Brudnak (GVSC), Karthik Krishnan (MSC)

        Modern-day Cyber-Physical Systems (CPS) offer opportunities to extract mobility and information-gathering capabilities in outdoor terrains while offering greater robustness and reliability. This “real-time Sense-Think-Act” performance emerges from the merger of (i) electromechanical vehicle platform capabilities and (ii) multi-modal spatio-temporal information gathering, coupled with (iii) size, weight, and power (SWaP)-constrained algorithmic intelligence to realize deployments. As exemplars of systems-of-systems, CPS vehicles also engender numerous interconnected component-, subsystem-, and system-level interactions. Intelligent orchestration and coordination of the newly provisioned flexibility, across the myriad parameters, settings, and behaviors, requires careful evaluation both at the design stage and during operations.
        In such a milieu, deep reinforcement learning (DRL) based methods hold enormous promise in overcoming the challenges of complexity and real-time performance in CPS vehicle deployments. Our work explores the creation of operational frameworks to support end-to-end deep-learning-based autonomy deployments that can support verification and validation spanning evaluations of: (i) dynamical-systems frameworks to discover parameter-set ranges (and sensitivity); (ii) sensor-enhanced motion-planning and control algorithms; (iii) reduced-order sensor-intelligence models for vision/LIDAR; (iv) multimodal multitask semantic learning models; and (v) automated scenario generation/analysis pipelines; using deep learning methods/methodologies.

Project 5.A78
Computational Representation and Analysis of Mission and System Requirements
Chandan Sahu, Vinayak Khade, Mohan S. R. Elapolu, Guo Freeman, Margaret Wiecek, and Rahul Rai (PI, Clemson U.)

        The complexity and magnitude of requirements are ever-increasing with the advent of disruptive technologies such as autonomy and the electrification of vehicles. Formal and systematic methods are needed for requirements analysis, representation, and traceability during the vehicle development process. Our research focuses on developing 1) AI (Artificial Intelligence) and NLP-based graph analysis methodologies, 2) ontology-based requirements representation, and 3) blockchain-based traceability models. We analyze the requirements by projecting them into a latent space and automatically generating graphs. A sequential machine learning model is developed to capture the semantic and syntactic dependencies in the requirement graphs. Further AI-based methods are being explored to establish links between the requirements. For a better understanding of requirements, we propose an ontology-based representation framework. The developed ontology acts as the framework's basis by capturing valuable information and relationships among the requirements. Finally, a unique traceability framework is presented that uses a simplified blockchain model to store the requirements engineering database. Graph-based traceability metamodels are extracted from these blockchain-based databases. The framework can establish traceability at both the artifact level and the requirement level.

Project 5.A79
A Formal Ontology and Simulation Model Library to enable Model Reuse and Integration
Graduate Student Researchers: Zhimin Chen, Ryan Colletti, Minhal Hussein, Edward Louis, Andrew Montalbano, Evan Taylor; Faculty: Gregory Mocko, Gang Li, Bing Li, Chris Paredis, Atul Kelkar (Clemson U.)

        The research objective is to establish a formal ontology as the basis for a simulation model library and to develop approaches that enable model reuse and integration. The development of next-generation ground vehicle systems relies on simulation models to predict vehicle performance, to conduct trade studies, and to support design verification and validation. In this work, we propose an ontology suite to model knowledge and meta-information describing simulation models. The ontology extends the Basic Formal Ontology (BFO) and the Common Core Ontologies (CCO). The ontology suite is composed of four domain ontologies: vehicle operations (VehOps), operational environment (Env), ground vehicle architecture (VehArch), and simulation model (SimMod), plus an integration ontology. The ontology suite is developed in the Web Ontology Language (OWL) and is based on current modeling standards (i.e., SAE J2998 and SAE J3049). Examples of models are presented, and query approaches are discussed. A controller is then proposed for integrating multi-domain, multi-physics, multi-scale, and multi-tool models, and examples are provided to demonstrate the challenges of model integration. Finally, the integration of the ontology suite with model integration is presented.

Project 5.A80
Exploring a Synthetic Tradespace through Decomposition and Coordination
Cameron J. Turner (PI), Georges Fadel, Rahul Rai, Yongjia Song, John Wagner, Margaret Wiecek (Co-PIs), Nafiseh Masoudi (Postdoc, Presenter, Clemson U.)

        The exploration of a tradespace plays an important role in the design of complex systems. During the problem development stage, tradespace exploration demonstrates the feasibility of the design task with respect to performance, risk, and program budget. This exploration is often performed before the design architecture is established, and we refer to it as pre-architectural exploration. Later in the design process, a second, post-architectural tradespace exploration that focuses on identifying the optimal technology solution set can also be beneficial for determining the design solution. In a typical high-dimensional tradespace, determining the acceptable design tradeoffs can be a substantial challenge. Using a synthetic tradespace, this research demonstrates a decomposition and coordination approach that allows improved decision-making in both pre- and post-architectural tradespace exploration. Furthermore, through the use of a synthetically developed tradespace model, a more complete understanding of the tradespace can be obtained for both exploration cases.