Robustness Enhancement of Object Detection in Advanced Driver Assistance Systems (ADAS)

A unified system integrating a compact object detector and a surrounding environmental condition classifier is proposed in this paper to enhance the robustness of object detection in advanced driver assistance systems (ADAS). ADAS are designed to improve traffic safety and efficiency in autonomous driving, where object detection plays a critical role. However, modern object detectors integrated into ADAS remain unreliable due to high latency and variations in environmental context during deployment. The proposed system addresses these problems with two main components: (1) a compact one-stage object detector expected to achieve accuracy comparable to state-of-the-art detectors, and (2) an environmental condition classifier that sends a warning signal to the cloud when the severity of the situation requires human intervention. Empirical results demonstrate the reliability and scalability of the proposed system in realistic scenarios.

Keywords—ADAS, object detection, autonomous driving, deep learning, intelligent systems.
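The two-component pipeline outlined above can be sketched as follows. This is a hypothetical illustration only, not the authors' implementation: the function names (`detect_objects`, `classify_condition`, `process_frame`), the condition labels, and the warning rule are all assumed stand-ins for the paper's compact detector and environmental condition classifier.

```python
# Hypothetical sketch of the proposed two-component ADAS pipeline:
# a compact one-stage detector plus an environmental-condition
# classifier that flags frames where human intervention may be needed.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Detection:
    label: str
    confidence: float

def detect_objects(frame) -> List[Detection]:
    # Stand-in for a compact one-stage detector (e.g. a YOLO-style model).
    return [Detection("car", 0.91), Detection("pedestrian", 0.48)]

def classify_condition(frame) -> str:
    # Stand-in for the environmental-condition classifier.
    return "heavy_rain"

# Assumed set of conditions considered severe enough to warn the cloud.
SEVERE_CONDITIONS = {"heavy_rain", "fog", "night_glare"}

def process_frame(frame) -> Tuple[List[Detection], str, bool]:
    """Run both components on one frame and decide whether to warn."""
    detections = detect_objects(frame)
    condition = classify_condition(frame)
    warn = condition in SEVERE_CONDITIONS  # would trigger a cloud signal
    return detections, condition, warn
```

In this sketch the warning decision depends only on the predicted condition; a real system would likely also weigh detector confidence and latency before escalating to the cloud.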
