How to Improve Object Detection in a Driver Assistance System Applying Explainable Deep Learning

Reliable perception and object detection are fundamental to vehicle autonomy. Although model-based approaches perform well in planning and control, they often fail in perception because of the open-world nature of the problems autonomous vehicles face. Therefore, data-driven approaches to object detection and localization are likely to be used in both self-driving cars and advanced driver assistance systems. In particular, deep neural networks have proved excellent at detecting and classifying objects in images, often achieving super-human performance. However, neural networks applied in intelligent vehicles need to be explainable, providing a rationale for their decisions. In this paper, we demonstrate how such an interpretation can be provided for a deep learning system that detects specific objects (charging posts) for driver assistance in an electric bus. The interpretation, achieved by visualizing attention heat maps, serves two purposes: it allows us to augment the training dataset, improving the results, and it can also be used as a tool when deploying the system with a given bus operator. Explaining which parts of an image triggered the decision helps to eliminate misdetections.
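
The sketch below illustrates the kind of attention heat map referred to above, using Grad-CAM on a standard torchvision classifier as a stand-in; it is not the authors' pipeline, and the backbone, target layer, and image path are illustrative assumptions. Overlaying the resulting map on the input image shows which regions drove the prediction, which is how misdetections can be spotted and the training set augmented.

```python
# Minimal Grad-CAM sketch (assumed setup, not the paper's exact method).
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image


def gradcam_heatmap(model, image, target_layer):
    """Return an [H, W] attention heat map for the model's top-scoring class."""
    activations, gradients = [], []
    fwd = target_layer.register_forward_hook(lambda m, i, o: activations.append(o))
    bwd = target_layer.register_full_backward_hook(lambda m, gi, go: gradients.append(go[0]))

    scores = model(image)                      # [1, num_classes]
    scores[0, scores.argmax()].backward()      # gradient of the winning class score
    fwd.remove()
    bwd.remove()

    # Channel importance = spatially averaged gradients; weight and sum the activations.
    weights = gradients[0].mean(dim=(2, 3), keepdim=True)
    cam = F.relu((weights * activations[0]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[-2:], mode="bilinear", align_corners=False)
    return (cam / cam.max().clamp(min=1e-8)).squeeze()  # normalize to [0, 1]


if __name__ == "__main__":
    model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()
    preprocess = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
    ])
    # "charging_post.jpg" is a placeholder file name for an input frame.
    img = preprocess(Image.open("charging_post.jpg").convert("RGB")).unsqueeze(0)
    heatmap = gradcam_heatmap(model, img, model.layer4[-1])
    print(heatmap.shape)  # overlay this map on the frame to inspect what triggered the decision
```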
