Probably Unknown: Deep Inverse Sensor Modelling Radar

Radar presents a promising alternative to lidar and vision in autonomous vehicle applications, as it can detect objects at long range under a variety of weather conditions. However, distinguishing occupied from free space in raw radar power returns is challenging due to complex interactions between sensor noise and occlusion. To address this, we propose to learn an Inverse Sensor Model (ISM) that converts a raw radar scan into a grid map of occupancy probabilities using a deep neural network. Our network is self-supervised using partial occupancy labels generated by lidar, allowing a robot to learn about world occupancy from past experience without human supervision. We evaluate our approach on five hours of data recorded in a dynamic urban environment. By accounting for the scene context of each grid cell, our model successfully segments the world into occupied and free space, outperforming standard CFAR filtering approaches. Additionally, by incorporating heteroscedastic uncertainty into our model formulation, we quantify how the uncertainty of our predictions varies across the sensor observation. Through this mechanism we successfully identify regions of space that are likely to be occluded.
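The heteroscedastic-uncertainty mechanism mentioned above can be illustrated with a Monte Carlo loss-attenuation sketch in the spirit of Kendall and Gal (2017): the network predicts a per-cell logit mean and log-variance, and corrupting the logit with Gaussian noise before the sigmoid lets the model down-weight the loss in ambiguous (e.g. occluded) cells. The function name, sample count, and NumPy formulation here are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def heteroscedastic_bce(logit_mean, logit_log_var, label, n_samples=50, rng=None):
    """Monte Carlo heteroscedastic binary cross-entropy for one grid cell.

    Illustrative sketch: the network outputs a logit mean and log-variance;
    large predicted variance attenuates the penalty for uncertain cells.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    std = np.exp(0.5 * logit_log_var)
    # Draw noisy logits and average the per-sample likelihoods.
    logits = logit_mean + std * rng.standard_normal(n_samples)
    probs = 1.0 / (1.0 + np.exp(-logits))
    probs = np.clip(probs, 1e-7, 1.0 - 1e-7)
    likelihood = probs if label == 1 else 1.0 - probs
    return -np.log(likelihood.mean())
```

Under this formulation a confidently wrong prediction incurs a large loss, while inflating the predicted variance on the same wrong logit reduces it, which is what allows the model to flag likely-occluded regions rather than commit to an occupancy value there.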
