SoilingNet: Soiling Detection on Automotive Surround-View Cameras

Cameras are an essential part of the sensor suite in autonomous driving. Surround-view cameras are directly exposed to the external environment and are vulnerable to soiling. Camera performance degrades far more from soiling than that of other sensors, so it is critical to accurately detect soiling on the cameras, particularly for higher levels of autonomous driving. We created a new dataset containing multiple types of soiling, namely opaque and transparent. It will be released publicly as part of our WoodScape dataset [15] to encourage further research. We demonstrate high accuracy using a Convolutional Neural Network (CNN) based architecture. We also show that soiling detection can be combined with the existing object detection task in a multi-task learning framework. Finally, we make use of Generative Adversarial Networks (GANs) to generate additional images for data augmentation and show that this works successfully, similar to style transfer.
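The multi-task idea above — a shared feature extractor feeding both a soiling-classification head and an object-detection head — can be sketched as follows. This is a minimal NumPy illustration under my own assumptions, not the paper's architecture: the encoder, head sizes, and class set (clean / opaque / transparent) are placeholders chosen for clarity.

```python
# Hypothetical sketch of a multi-task network: one shared convolutional
# encoder, two task heads (soiling classification and a stand-in for
# object detection). NumPy only; weights are random, not trained.
import numpy as np

rng = np.random.default_rng(0)

def conv2d(x, k):
    """Valid 2D cross-correlation of a single-channel image x with kernel k."""
    kh, kw = k.shape
    h, w = x.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def relu(x):
    return np.maximum(x, 0.0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Shared encoder: four 3x3 conv filters, ReLU, global average pooling.
kernels = rng.standard_normal((4, 3, 3))

def encode(img):
    feats = [relu(conv2d(img, k)).mean() for k in kernels]
    return np.array(feats)  # shared 4-dim feature vector

# Task-specific heads on top of the shared features.
W_soil = rng.standard_normal((3, 4))  # 3 soiling classes (assumed set)
W_det = rng.standard_normal((4, 4))   # placeholder 4-dim detection output

img = rng.standard_normal((16, 16))   # toy grayscale frame
z = encode(img)
soiling_probs = softmax(W_soil @ z)   # per-class soiling probabilities
box = W_det @ z                       # stand-in for a detection regressor
```

The point of the shared encoder is the one the abstract makes: the soiling task reuses features already computed for detection, so adding it costs little extra compute.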

[1]  John McDonald,et al.  Vision-Based Driver Assistance Systems: Survey, Taxonomy and Advances , 2015, 2015 IEEE 18th International Conference on Intelligent Transportation Systems.

[2]  Luc Van Gool,et al.  Semantic Foggy Scene Understanding with Synthetic Data , 2017, International Journal of Computer Vision.

[3]  Senthil Yogamani,et al.  Real-time Joint Object Detection and Semantic Segmentation Network for Automated Driving , 2019, ArXiv.

[4]  Raanan Fattal,et al.  Single image dehazing , 2008, ACM Trans. Graph..

[5]  Jae-Seok Choi,et al.  Fully End-to-End Learning Based Conditional Boundary Equilibrium GAN with Receptive Field Sizes Enlarged for Single Ultra-High Resolution Image Dehazing , 2018, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW).

[6]  Jan Kautz,et al.  Multimodal Unsupervised Image-to-Image Translation , 2018, ECCV.

[7]  John McDonald,et al.  Computer vision in automated parking systems: Design, implementation and challenges , 2017, Image Vis. Comput..

[8]  Yoshua Bengio,et al.  Generative Adversarial Nets , 2014, NIPS.

[9]  SAE International.  Taxonomy and Definitions for Terms Related to Driving Automation Systems for On-Road Motor Vehicles , 2022.

[10]  Senthil Yogamani,et al.  NeurAll: Towards a Unified Model for Visual Perception in Automated Driving , 2019, ArXiv.

[11]  Shai Avidan,et al.  Non-local Image Dehazing , 2016, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).

[12]  Stefan Milz,et al.  WoodScape: A Multi-Task, Multi-Camera Fisheye Dataset for Autonomous Driving , 2019, 2019 IEEE/CVF International Conference on Computer Vision (ICCV).

[13]  Jun-Yan Zhu,et al.  Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks , 2017, 2017 IEEE International Conference on Computer Vision (ICCV).

[14]  Paul Newman,et al.  I Can See Clearly Now: Image Restoration via De-Raining , 2019, 2019 International Conference on Robotics and Automation (ICRA).

[15]  David Hurych,et al.  Yes, we GAN: Applying Adversarial Techniques for Autonomous Driving , 2019, Autonomous Vehicles and Machines.