Visual-Inertial Localization With Prior LiDAR Map Constraints

In this letter, we develop a low-cost stereo visual-inertial localization system that leverages an efficient multi-state constraint Kalman filter (MSCKF)-based visual-inertial odometry (VIO) while exploiting an a priori LiDAR map to provide bounded-error three-dimensional navigation. In addition to the standard sparse visual feature measurements used in VIO, global registrations of visual semi-dense point clouds to the prior LiDAR map are also exploited in a tightly-coupled MSCKF update, thereby correcting accumulated drift. Particular attention is paid to this cross-modality constraint between visual and LiDAR point clouds. The proposed approach is validated in both Monte Carlo simulations and real-world experiments, showing that map constraints between point clouds created by different sensing modalities greatly improve standard VIO and provide bounded-error performance.
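The tightly-coupled update described above can be sketched as a standard EKF correction, where the measurement is the 6-DoF pose produced by registering the visual semi-dense cloud against the prior LiDAR map. The sketch below is a minimal, hypothetical illustration in Python/NumPy, not the paper's implementation: it assumes a toy 6-dimensional pose error state (3 position, 3 small-angle orientation) and a direct pose observation, whereas the actual MSCKF state additionally carries IMU biases, velocity, and a sliding window of cloned camera poses.

```python
import numpy as np

def map_registration_update(x, P, z_pose, R):
    """One EKF correction fusing a 6-DoF pose measurement obtained by
    registering a local visual cloud against a prior LiDAR map.

    x      : (6,) pose error state [dp(3), dtheta(3)] (toy linearized model)
    P      : (6,6) state covariance
    z_pose : (6,) registered pose expressed in the same error-state frame
    R      : (6,6) registration noise covariance (e.g. from NDT/ICP fitness)
    """
    H = np.eye(6)                     # direct pose observation (toy model)
    y = z_pose - H @ x                # innovation
    S = H @ P @ H.T + R               # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)    # Kalman gain
    x_new = x + K @ y                 # corrected state
    P_new = (np.eye(6) - K @ H) @ P   # corrected covariance
    return x_new, P_new
```

In this toy model, each global registration pulls the estimate toward the map-anchored pose and shrinks the covariance, which is the mechanism by which the map constraint bounds the drift that pure VIO would otherwise accumulate.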
