Local Descriptor for Robust Place Recognition Using LiDAR Intensity

Place recognition is a challenging problem in mobile robotics, especially in unstructured environments or under viewpoint and illumination changes. Most LiDAR-based methods rely on geometric features to overcome these challenges, since scene geometry is largely invariant to the viewpoint and illumination changes that significantly affect camera-based solutions. Compared to cameras, however, LiDARs lack the rich, descriptive appearance information that imaging provides. To combine the benefits of geometry and appearance, we propose coupling the conventional geometric information from the LiDAR with its calibrated intensity return. This strategy yields a new local descriptor, coined ISHOT, which outperforms popular state-of-the-art geometry-only descriptors by a significant margin in our local descriptor evaluation. To complete the framework, we further develop a probabilistic keypoint voting place recognition algorithm that leverages the new descriptor and achieves sublinear place recognition performance. The efficacy of our approach is validated in challenging global localization experiments in large-scale built-up and unstructured environments.

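The abstract does not detail the descriptor construction or the voting scheme. Purely as an illustrative sketch of the stated ideas, the snippet below concatenates a SHOT-style geometric histogram with a histogram of calibrated intensity returns from the keypoint neighborhood, and accumulates plain k-nearest-neighbor keypoint votes per candidate place. Function names, bin counts, weighting, and the non-probabilistic vote count are assumptions for illustration, not the paper's actual ISHOT formulation or probabilistic scoring.

```python
import numpy as np

def intensity_augmented_descriptor(geom_descriptor, neighbor_intensities,
                                   n_intensity_bins=16, weight=1.0):
    """Illustrative sketch: append a normalized histogram of calibrated
    intensity returns (assumed in [0, 1]) to a SHOT-like geometric
    descriptor. Bin count and weighting are hypothetical."""
    hist, _ = np.histogram(neighbor_intensities, bins=n_intensity_bins,
                           range=(0.0, 1.0))
    hist = hist.astype(np.float64)
    if np.linalg.norm(hist) > 0:
        hist /= np.linalg.norm(hist)
    geom = np.asarray(geom_descriptor, dtype=np.float64)
    if np.linalg.norm(geom) > 0:
        geom = geom / np.linalg.norm(geom)
    # Concatenate geometric and intensity parts into one descriptor vector.
    return np.concatenate([geom, weight * hist])

def vote_for_places(query_descriptors, db_descriptors, db_place_ids, k=5):
    """Simplified keypoint voting: each query keypoint casts votes for the
    places owning its k nearest database descriptors (brute-force search).
    A plain vote count stands in for the paper's probabilistic scoring."""
    votes = {}
    for q in query_descriptors:
        dists = np.linalg.norm(db_descriptors - q, axis=1)
        for idx in np.argsort(dists)[:k]:
            place = db_place_ids[idx]
            votes[place] = votes.get(place, 0) + 1
    # Rank candidate places by accumulated votes.
    return sorted(votes.items(), key=lambda kv: kv[1], reverse=True)
```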