Airborne Particle Classification in LiDAR Point Clouds Using Deep Learning

LiDAR sensors are widely used in robotics thanks to their accurate range measurements and robustness to lighting conditions. However, their sensitivity to airborne particles such as dust or fog can cause perception algorithms to fail (e.g. field robots detecting false obstacles). In this work, we address this problem by proposing methods to classify airborne particles in LiDAR data. We propose and compare two deep learning approaches: the first performs voxel-wise classification, while the second performs point-wise classification. We also study the impact of different combinations of input features extracted from LiDAR data, including multi-echo returns as a classification feature. We evaluate the proposed methods on a realistic dataset containing fog and dust particles in outdoor scenes. We achieve an F1 score of 94% for the classification of airborne particles in LiDAR point clouds, significantly outperforming the state of the art. We demonstrate the practical significance of this work on two real-world use cases: a relative pose estimation task using point cloud matching, and an obstacle detection task. The code and dataset used in this work are available online.
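To make the voxel-wise idea concrete, the sketch below groups a raw point cloud into a voxel grid and computes simple per-voxel features (point count and mean intensity) of the kind that could feed a voxel classifier. The voxel size, feature choice, and toy data are illustrative assumptions only, not the paper's actual pipeline or network input.

```python
import numpy as np

def voxelize(points, voxel_size=0.2):
    """Group LiDAR points (x, y, z, intensity) into voxels and compute
    simple per-voxel features. Illustrative sketch: the voxel size and
    features are assumptions, not the method evaluated in the paper."""
    # Integer voxel index of each point.
    coords = np.floor(points[:, :3] / voxel_size).astype(np.int64)
    # Unique occupied voxels and, for each point, the voxel it falls in.
    voxels, inverse = np.unique(coords, axis=0, return_inverse=True)
    counts = np.bincount(inverse)                                  # points per voxel
    mean_intensity = np.bincount(inverse, weights=points[:, 3]) / counts
    return voxels, counts, mean_intensity

# Toy cloud: a dense, high-intensity cluster (solid surface) and a
# sparse, low-intensity cluster (dust-like returns).
rng = np.random.default_rng(0)
solid = np.hstack([rng.normal(0.0, 0.05, (50, 3)), rng.uniform(0.8, 1.0, (50, 1))])
dust = np.hstack([rng.normal(5.0, 1.0, (20, 3)), rng.uniform(0.0, 0.2, (20, 1))])
cloud = np.vstack([solid, dust])

voxels, counts, intensity = voxelize(cloud)
```

In this toy setup, dust-like voxels tend to have low point counts and low mean intensity, which is the intuition behind using such per-voxel statistics (alongside learned features) to separate airborne particles from solid structure.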
