Benchmarking Sampling-based Probabilistic Object Detectors

This paper provides the first benchmark for sampling-based probabilistic object detectors. A probabilistic object detector expresses uncertainty for all detections that reliably indicates object localisation and classification performance. We compare the performance of two sampling-based uncertainty techniques, namely Monte Carlo Dropout and Deep Ensembles, when implemented into one-stage and two-stage object detectors, Single Shot MultiBox Detector and Faster R-CNN. Our results show that Deep Ensembles outperform MC Dropout for both types of detectors. We also introduce a new merging strategy for sampling-based techniques and one-stage object detectors. We show that this novel merging strategy has competitive performance with previously established strategies, while having only one free parameter.
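Both techniques obtain uncertainty by drawing multiple sets of detections from a detector and merging them. The snippet below is a minimal sketch (not the authors' implementation) of how Monte Carlo Dropout samples could be drawn in PyTorch; it assumes the detector contains nn.Dropout layers, and the helper names and the sample count of 20 are illustrative only. Deep Ensembles would instead pass the same images through several independently trained detectors, and merging the per-sample detections into final detections with uncertainty estimates (the merging strategies the paper evaluates) is not shown here.

```python
# Sketch only: assumes `model` is a torch.nn.Module detector with nn.Dropout layers.
import torch
import torch.nn as nn


def enable_mc_dropout(model: nn.Module) -> None:
    """Keep the model in eval mode, but re-activate its dropout layers so that
    repeated forward passes yield stochastic (Monte Carlo) detections."""
    model.eval()
    for module in model.modules():
        if isinstance(module, nn.Dropout):
            module.train()


@torch.no_grad()
def mc_dropout_samples(model: nn.Module, images, num_samples: int = 20):
    """Run `num_samples` stochastic forward passes and collect the raw detections.
    A merging strategy would then group these samples into final detections
    with localisation and classification uncertainty."""
    enable_mc_dropout(model)
    return [model(images) for _ in range(num_samples)]
```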
