Automotive Research Center

Vehicle Controls & Behaviors

Annual Plan

Recognizing and Reconstructing Distorted and Obscured Perceptual Sensor Data Resulting from Soiling of the Sensor

Project Team

Principal Investigator

Daniel Carruth, Mississippi State University

Government

Paramsothy Jayakumar, U.S. Army GVSC

Industry

Hui Zhou, Oliver Jeromin, Rivian

Student

Jacob Kutch, Mississippi State University

Project Summary

In off-road operations, cameras will suffer from environmental effects such as raindrops, water spray, dirt and dust, mud, and snow that will distort or occlude sensor imagery. While hydrophobic and hydrophilic coatings and active sensor protection and cleaning systems (e.g., water or air jets) can help avoid sensor soiling, they may offer only limited protection, have limited durability, require regular maintenance, or themselves modify lens properties and distort imagery.

Current efforts to reconstruct distorted imagery have focused on the effects of rainfall and water drops, snowfall, and water spray. The research that has addressed the distorting effects of water and mud on the camera lens has focused on on-road driving. Public datasets that include camera data for adverse weather conditions often lack ancillary data such as rainfall rate or other quantification of the obscuring material and, because the imagery is captured in the field, cannot include clean ground-truth images.

Sensors in the field will be soiled, will be damaged, and will fail to function. Algorithms developed and evaluated only against pristine sensor data will be brittle in the face of adverse conditions. Datasets of soiled sensor data paired with ground-truth annotations of occluded features or with clean imagery are needed to train and evaluate algorithms. Modeling and simulation tools must be able to realistically represent these environmental effects on sensor data, and approaches must be devised to detect and adapt to sensor soiling.
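
As a purely illustrative sketch of what detecting soiling might look like (not a method proposed by the project), the Python snippet below flags image blocks whose local contrast is unusually low, on the assumption that adherent mud or water tends to wash out fine texture; the block size, threshold, and file name are placeholders.

```python
# Illustrative sketch: flag image blocks that may be occluded by mud or water
# by looking for unusually low local contrast. The block size, threshold, and
# input file name are assumptions chosen for illustration only.
import cv2
import numpy as np

def soiling_mask(image_bgr: np.ndarray, block: int = 32,
                 contrast_thresh: float = 8.0) -> np.ndarray:
    """Return a per-block boolean mask marking suspiciously low-contrast blocks."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
    h, w = gray.shape
    rows, cols = h // block, w // block
    mask = np.zeros((rows, cols), dtype=bool)
    for by in range(rows):
        for bx in range(cols):
            patch = gray[by * block:(by + 1) * block,
                         bx * block:(bx + 1) * block]
            # Mud smears and adherent drops blur fine texture, so the standard
            # deviation of pixel intensities in a soiled block is typically low.
            mask[by, bx] = patch.std() < contrast_thresh
    return mask

if __name__ == "__main__":
    frame = cv2.imread("camera_frame.png")  # hypothetical input frame
    occluded = soiling_mask(frame)
    print(f"{occluded.mean():.1%} of blocks flagged as possibly soiled")
```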

We identify the following fundamental research questions to address:

  1. How can key occluded features be recovered quickly and accurately for downstream processing (e.g., feature detection, off-road trafficability estimation, or visual odometry)?
  2. How can accurate synthetic soiled-imagery datasets be generated for training and evaluating image-cleaning algorithms? (A minimal compositing sketch follows this list.)
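
One plausible way to produce such paired data, sketched below purely for illustration, is to composite semi-transparent, blurred mud-like blobs onto clean frames so that each soiled image keeps its clean counterpart as ground truth. The blob counts, sizes, colors, and blur radii here are arbitrary assumptions rather than project parameters, and the file names are hypothetical.

```python
# Illustrative sketch of generating paired clean/soiled imagery by compositing
# synthetic mud-like blobs onto a clean frame. All numeric parameters below are
# arbitrary assumptions, not values taken from the project.
import cv2
import numpy as np

rng = np.random.default_rng(0)

def soil_image(clean_bgr: np.ndarray, n_blobs: int = 12) -> np.ndarray:
    h, w = clean_bgr.shape[:2]
    overlay = np.zeros_like(clean_bgr, dtype=np.float32)
    alpha = np.zeros((h, w), dtype=np.float32)
    for _ in range(n_blobs):
        cx, cy = int(rng.integers(0, w)), int(rng.integers(0, h))
        r = int(rng.integers(h // 20, h // 6))
        color = rng.uniform(30, 90, size=3)  # dark brownish "mud" tone
        cv2.circle(overlay, (cx, cy), r, color.tolist(), -1)
        cv2.circle(alpha, (cx, cy), r, float(rng.uniform(0.6, 1.0)), -1)
    # Feather blob edges and blur the underlying scene so the occlusion looks
    # like out-of-focus material adhering to the lens rather than a sticker.
    alpha = cv2.GaussianBlur(alpha, (0, 0), sigmaX=15)[..., None]
    blurred = cv2.GaussianBlur(clean_bgr, (0, 0), sigmaX=7).astype(np.float32)
    composite = (1 - alpha) * clean_bgr + alpha * (0.5 * overlay + 0.5 * blurred)
    return composite.clip(0, 255).astype(np.uint8)

clean = cv2.imread("clean_frame.png")               # hypothetical clean image
cv2.imwrite("soiled_frame.png", soil_image(clean))  # paired with clean_frame.png
```

A crude compositor of this kind is only a stand-in for physics-based rendering of droplets and mud, but even a simple overlay yields pixel-aligned (clean, soiled) pairs suitable for supervised training and evaluation.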

This work proposes the following research objectives:

  1. Implement state-of-the-art deep learning models that clean soiled imagery to support downstream tasks such as detection of critical features (e.g., lane markings and obstacles), off-road trafficability estimation, and visual odometry for autonomous driving in on-road and off-road environments (a minimal training sketch follows this list).
  2. Create a benchmark dataset of real and synthetic camera imagery soiled by water and mud that includes ground-truth annotations of key features and paired clean imagery.
  3. Develop physics-based simulation of partial and full occlusion of sensor imagery due to water and mud on the camera lens.
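
For the first objective, the following minimal PyTorch sketch shows the general shape of training an image-to-image cleaning network on paired soiled/clean frames. The tiny encoder-decoder, L1 loss, and random stand-in tensors are placeholders chosen for illustration, not the models or data the project will evaluate.

```python
# Minimal PyTorch sketch of training an image-to-image "cleaning" network on
# paired soiled/clean frames. The architecture and loss are placeholders.
import torch
import torch.nn as nn

class TinyCleaner(nn.Module):
    def __init__(self):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decode = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decode(self.encode(x))

model = TinyCleaner()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()

# Stand-in batch: in practice these tensors would come from the paired
# benchmark dataset of soiled frames and their clean ground-truth counterparts.
soiled = torch.rand(4, 3, 128, 128)
clean = torch.rand(4, 3, 128, 128)

for step in range(10):  # toy loop; real training runs far longer
    restored = model(soiled)
    loss = loss_fn(restored, clean)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```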

Publications from Prior Work closely related to the proposed project:

  1. L. Dabbiru, S. Sharma, C. Goodin, S. Ozier, C. Hudson, D.W. Carruth, M. Doude, G. Mason, and J.E. Ball, “Traversability mapping in off-road environment using semantic segmentation.” In Autonomous Systems: Sensors, Processing, and Security for Vehicles and Infrastructure 2021, vol. 11748, p. 117480C. International Society for Optics and Photonics, 2021. DOI: 10.1117/12.2587661
  2. S. Sharma, B. Tang, J.E. Ball, D.W. Carruth, and L. Dabbiru, “Recursive multi-scale image deraining with sub-pixel convolution based feature fusion and context aggregation,” IEEE Access, 8, 2021. DOI: 10.1109/ACCESS.2020.3024542
  3. S. Sharma, C. Goodin, M. Doude, C. Hudson, et al., “Understanding how rain affects semantic segmentation algorithm performance,” SAE Technical Paper 2020-01-0092, 2020. DOI: 10.4271/2020-01-0092
  4. C. Goodin, D.W. Carruth, M. Doude, and C. Hudson, “Predicting the influence of rain on LIDAR in ADAS,” Electronics, 8(1), 2019. DOI: 10.3390/electronics8010089
  5. C. Goodin, S. Sharma, M. Doude, D.W. Carruth, L. Dabbiru, and C. Hudson, “Training of neural networks with automated labeling of simulated sensor data,” SAE Technical Paper 2019-01-0120, 2019. DOI: 10.4271/2019-01-0120
  6. S. Sharma, J.E. Ball, B. Tang, D.W. Carruth, M. Doude, and M.A. Islam, “Semantic segmentation with transfer learning for off-road autonomous driving,” Sensors, 19(11), 2019. DOI: 10.3390/s19112577

Project #1.38