F-LOAM: Fast LiDAR Odometry and Mapping

Simultaneous Localization and Mapping (SLAM) has wide robotic applications such as autonomous driving and unmanned aerial vehicles. Both computational efficiency and localization accuracy are of great importance to a good SLAM system. Existing works on LiDAR-based SLAM often formulate the problem as two modules: scan-to-scan matching and scan-to-map refinement. Both modules are solved by iterative calculation, which is computationally expensive. In this paper, we propose a general solution that aims to provide a computationally efficient and accurate framework for LiDAR-based SLAM. Specifically, we adopt a non-iterative two-stage distortion compensation method to reduce the computational cost. For each scan input, edge and planar features are extracted and matched separately to a local edge map and a local plane map, where local smoothness is also considered during iterative pose optimization. Thorough experiments evaluate the performance of the method in challenging scenarios, including localization of a warehouse Automated Guided Vehicle (AGV) and a public autonomous-driving dataset. The proposed method achieves competitive localization accuracy at a processing rate of more than 10 Hz on the public dataset, which provides a good trade-off between performance and computational cost for practical applications. It is one of the most accurate and fastest open-sourced SLAM systems in the KITTI dataset ranking.
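
To make the feature-extraction step above concrete, below is a minimal C++ sketch of the LOAM-style local-smoothness score; the Point struct, function name, and window size k are illustrative assumptions, not taken from the paper's open-source code. For each point on a scan ring, the coordinate differences to its 2k neighbors are summed, and the squared norm of that sum separates sharp edge candidates (large score) from flat planar candidates (small score).

    #include <cstddef>
    #include <vector>

    // Illustrative point type; a real pipeline would use a PCL point type.
    struct Point { float x, y, z; };

    // Compute a local-smoothness score for each point of one scan ring,
    // using a symmetric window of k neighbors on each side.
    std::vector<float> computeSmoothness(const std::vector<Point>& ring, int k = 5) {
        std::vector<float> smoothness(ring.size(), 0.0f);
        const int n = static_cast<int>(ring.size());
        for (int i = k; i + k < n; ++i) {
            float dx = 0.0f, dy = 0.0f, dz = 0.0f;
            // Sum the coordinate differences between the center point and its
            // 2k ring neighbors; on a flat surface the sum stays near zero.
            for (int j = -k; j <= k; ++j) {
                dx += ring[i + j].x - ring[i].x;
                dy += ring[i + j].y - ring[i].y;
                dz += ring[i + j].z - ring[i].z;
            }
            // Squared norm of the summed difference vector: large values mark
            // edge candidates, small values mark planar candidates.
            smoothness[i] = dx * dx + dy * dy + dz * dz;
        }
        return smoothness;
    }

In a full pipeline, the highest-scoring points in each ring segment would be kept as edge features and the lowest-scoring as planar features, then matched to the local edge map and local plane map respectively for pose optimization.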
