LIC-Fusion: LiDAR-Inertial-Camera Odometry

This paper presents a tightly-coupled multi-sensor fusion algorithm, LiDAR-inertial-camera fusion (LIC-Fusion), which efficiently fuses IMU measurements, sparse visual features, and extracted LiDAR points. In particular, LIC-Fusion performs online spatial and temporal calibration between all three asynchronous sensors to compensate for possible calibration variations. The key contribution is the optimal (up to linearization errors) multi-modal fusion of sparse edge/surf feature points detected and tracked in LiDAR scans, together with sparse visual feature observations and IMU readings, within an efficient MSCKF-based framework. Extensive experiments in both indoor and outdoor environments show that the proposed LIC-Fusion outperforms state-of-the-art visual-inertial odometry (VIO) and LiDAR odometry methods in terms of estimation accuracy and robustness to aggressive motions.
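For concreteness, a sketch of what such an MSCKF-based state might look like is given below. The notation and exact parameterization are illustrative assumptions rather than the paper's own, but they reflect the components described above: the IMU state, camera and LiDAR extrinsics with per-sensor time offsets for online spatio-temporal calibration, and a sliding window of cloned IMU poses used to form multi-state constraints.

\begin{equation*}
\mathbf{x}_k =
\Big(
\underbrace{{}^{I}_{G}\bar{q},\ \mathbf{b}_g,\ {}^{G}\mathbf{v}_I,\ \mathbf{b}_a,\ {}^{G}\mathbf{p}_I}_{\text{IMU state}},\ 
\underbrace{{}^{C}_{I}\bar{q},\ {}^{C}\mathbf{p}_I,\ t_{d_C}}_{\text{camera extrinsics, time offset}},\ 
\underbrace{{}^{L}_{I}\bar{q},\ {}^{L}\mathbf{p}_I,\ t_{d_L}}_{\text{LiDAR extrinsics, time offset}},\ 
\underbrace{\{{}^{I_i}_{G}\bar{q},\ {}^{G}\mathbf{p}_{I_i}\}_{i=1}^{m}}_{\text{sliding window of cloned poses}}
\Big)
\end{equation*}

In an MSCKF-style update, the sliding window of cloned poses allows both visual features and LiDAR edge/surf features to be marginalized out rather than kept in the state, yielding pose constraints at low computational cost.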
