Vehicle Controls & Behaviors
Annual Plan
Adaptive and Efficient Perception for Autonomous Ground Vehicles Operating in Highly Stochastic Environments under Sensing Uncertainties
Project Team
Government
Jonathon Smereka, US Army GVSC
Faculty
Pramod Khargonekar, UCI
Industry
Marco Levorato, FLexAI
Student
Trier Mortlock, UCI
Project Summary
Project begins in 2023.
The Army often operates in safety-critical capacities where improvements in perception can enhance warfighter capabilities through an increased understanding of operational environments. During forward operations, off-road autonomous vehicles (AVs) face numerous perception challenges across various domains. Off-road AVs typically use multiple sensors, large deep-learning models, and powerful hardware platforms to perceive the environment and navigate safely at high speeds under strict latency requirements. During AV operations in challenging, stochastic environments, some sensing modalities degrade perception while increasing energy consumption and latency, which can be detrimental to safety-critical AV missions. Existing methods are insufficiently robust in these harsh operating environments (e.g., extreme terrain, high speeds, bad weather, low light, sensor obstructions) due to (i) rigidity in their fusion implementations, (ii) their inability to model contextual information in real time, and (iii) the high levels of sensor input uncertainty present [1]. Adaptive sensor fusion methods can dynamically adjust to, and thus overcome, the above challenges by leveraging a learning-based understanding of the environment, thereby enabling higher levels of perception performance, energy efficiency, and situational awareness.
In this research project, our overarching goal is to improve the perception performance of autonomous vehicles by introducing new sensor fusion methods that include mechanisms for adaptation. Specifically, we aim to study how varying computational constraints and sensor uncertainties arising from operational requirements propagate through sensor fusion-based perception frameworks, and to propose new methods that account for these dynamic variabilities.
We aim to solve the following fundamental research questions (RQs):
(RQ1) How can sensor fusion be applied adaptively and efficiently for AV perception systems operating at high speeds in difficult terrain? When an AV operates in unseen and dynamic environments, its perception method must adapt in real time by switching between sensor configurations (e.g., cameras, radars, lidars) while also respecting operational power constraints. This will be the first work to characterize the energy efficiency and perception performance of an adaptive sensor fusion approach as a function of speed and terrain.
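To make the adaptation mechanism concrete, the sketch below gates among per-sensor feature branches with a lightweight context network. This is a minimal illustration in the spirit of the selective fusion of [1], not that system's implementation; the branch set, encoder shapes, and gating design are all assumptions.

```python
# Illustrative sketch only: context-gated fusion over per-sensor branches.
# Branch names, feature sizes, and the gate architecture are placeholder assumptions.
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    def __init__(self, branches: dict, ctx_dim: int):
        super().__init__()
        self.branches = nn.ModuleDict(branches)   # e.g., camera/radar/lidar encoders
        self.gate = nn.Sequential(                # small network: context -> branch weights
            nn.Linear(ctx_dim, 64),
            nn.ReLU(),
            nn.Linear(64, len(branches)),
        )

    def forward(self, inputs: dict, ctx: torch.Tensor) -> torch.Tensor:
        weights = torch.softmax(self.gate(ctx), dim=-1)        # (batch, n_branches)
        feats = torch.stack(
            [self.branches[name](inputs[name]) for name in self.branches],
            dim=1,
        )                                                      # (batch, n_branches, feat_dim)
        return (weights.unsqueeze(-1) * feats).sum(dim=1)      # context-weighted fused feature

# Example with stand-in encoders: two "sensors" producing 32-dim features.
fusion = GatedFusion({"camera": nn.Linear(128, 32), "radar": nn.Linear(64, 32)}, ctx_dim=8)
out = fusion(
    {"camera": torch.randn(4, 128), "radar": torch.randn(4, 64)},
    ctx=torch.randn(4, 8),
)
print(out.shape)  # torch.Size([4, 32])
```

A soft gate like this is differentiable and easy to train, but in deployment a hard selection (e.g., top-k or Gumbel-softmax gating) would be needed to realize the energy savings, since only the selected branches would then be executed.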
(RQ2) How can the situational awareness of an AV be improved by modeling contextual information from sensor inputs? By modeling the surrounding environment, an AV can become more aware of nearby actors and general operating conditions. Specifically, we attempt to model a scene with respect to features that indicate when a different sensor or fusion method would perform better on the perception task. We will perform feature importance studies to examine which aspects of contextual scene understanding are important in enabling intelligent, autonomous decisions for perception.
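One plausible form for such a feature importance study is permutation importance over candidate context features. The sketch below is a self-contained toy: the feature names, labeling rule, and classifier are placeholder assumptions, not project results.

```python
# Illustrative sketch only: permutation importance of contextual scene features
# for predicting which sensor/fusion branch performs best. All data is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
FEATURES = ["speed", "terrain_roughness", "illuminance", "precip_rate"]  # placeholders
X = rng.random((1000, 4))
# Placeholder rule: low illuminance favors branch 1, heavy precipitation favors branch 2.
y = np.where(X[:, 2] < 0.3, 1, np.where(X[:, 3] > 0.7, 2, 0))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
result = permutation_importance(clf, X_te, y_te, n_repeats=20, random_state=0)
for name, imp in zip(FEATURES, result.importances_mean):
    print(f"{name}: {imp:.3f}")  # importance of each context feature to branch choice
```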
(RQ3) How can hardware complexities be incorporated into adaptive sensor fusion for AV perception? Energy and computation constraints can limit sensing ability, e.g., the frequency and resolution of sensor data. Additionally, sensor failures caused by rugged environments contribute to hardware complexities. We will propose novel methods to incorporate (a) system-level signature constraints and (b) sensor-level utility per sensing environment into our adaptive fusion algorithms and contextual modeling of scenes.
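As a toy illustration of combining (a) a system-level power budget with (b) per-sensor utility, sensor-configuration choice could be framed as constrained subset selection. The utilities, power draws, and diminishing-returns combination rule below are all assumed placeholders.

```python
# Illustrative sketch only: choose the sensor subset maximizing expected perception
# utility subject to an energy budget; all numbers are placeholder assumptions.
from itertools import combinations

SENSORS = {"camera": (0.70, 4.0), "radar": (0.55, 2.5), "lidar": (0.80, 9.0)}
#            name: (expected utility in current context, power draw in watts)

def best_subset(budget_w: float):
    best, best_util = None, -1.0
    for r in range(1, len(SENSORS) + 1):
        for subset in combinations(SENSORS, r):
            power = sum(SENSORS[s][1] for s in subset)
            # Diminishing returns: utility of a set is 1 - prod(1 - u_i).
            miss = 1.0
            for s in subset:
                miss *= 1.0 - SENSORS[s][0]
            util = 1.0 - miss
            if power <= budget_w and util > best_util:
                best, best_util = subset, util
    return best, best_util

print(best_subset(budget_w=8.0))  # -> (('camera', 'radar'), 0.865) under this tight budget
```

Exhaustive search is shown only for clarity; with more sensors or richer signature constraints, the same objective could be handled by standard knapsack-style solvers.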
(RQ4) How might uncertainties in sensor inputs propagate through, and affect, an AV perception system operating in highly stochastic environments? Uncertainty propagation is a fundamental challenge that can drive a perception system into unstable operating states. We will conduct a performance analysis of our proposed methods under various levels of sensing uncertainty and attempt to characterize sensor fusion model performance as a function of these uncertainties.
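A minimal sketch of the kind of characterization we intend is a Monte Carlo sweep over injected noise levels; the binary recovery task and Gaussian noise model below are stand-ins for a real perception pipeline and its sensor noise models.

```python
# Illustrative sketch only: Monte Carlo characterization of perception performance
# as a function of sensing uncertainty; the task and noise model are toy stand-ins.
import numpy as np

rng = np.random.default_rng(42)

def trial_accuracy(sigma: float, n: int = 1000) -> float:
    """Toy perception task: recover a binary latent state from a noisy sensor reading."""
    latent = np.sign(rng.standard_normal(n))         # ground-truth state per frame
    noisy = latent + sigma * rng.standard_normal(n)  # reading with noise std sigma
    return float(np.mean(np.sign(noisy) == latent))  # fraction recovered correctly

for sigma in [0.0, 0.1, 0.5, 1.0, 2.0]:              # sweep sensing-uncertainty levels
    scores = [trial_accuracy(sigma) for _ in range(50)]  # Monte Carlo trials per level
    print(f"sigma={sigma:.1f}: accuracy={np.mean(scores):.3f} +/- {np.std(scores):.3f}")
```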
References:
[1] Malawade, Arnav Vaibhav, Trier Mortlock, and Mohammad Abdullah Al Faruque. “HydraFusion: Context-aware selective sensor fusion for robust and efficient autonomous vehicle perception.” 2022 ACM/IEEE 13th International Conference on Cyber-Physical Systems (ICCPS). IEEE, 2022.