PL-VIO: Tightly-Coupled Monocular Visual–Inertial Odometry Using Point and Line Features

To estimate the camera trajectory and build a structural three-dimensional (3D) map from inertial measurements and visual observations, this paper proposes point–line visual–inertial odometry (PL-VIO), a tightly-coupled monocular visual–inertial odometry system that exploits both point and line features. Compared with point features, lines provide significantly more geometric structure information about the environment. To represent a 3D spatial line both compactly and in a form that is cheap to compute with, Plücker coordinates and the orthonormal representation are employed. To fuse the information from inertial measurement units (IMUs) and the visual sensor tightly and efficiently, the states are optimized by minimizing a cost function that combines the pre-integrated IMU error term with the point and line re-projection error terms in a sliding-window optimization framework. Experiments on public datasets demonstrate that PL-VIO, combining point and line features, outperforms several state-of-the-art VIO systems that use point features only.
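As a sketch of the formulation the abstract describes (the symbols below are illustrative, not copied from the paper): a 3D spatial line in Plücker coordinates is the 6-vector

$$\mathcal{L} = (\mathbf{n}^{\top}, \mathbf{d}^{\top})^{\top} \in \mathbb{R}^{6}, \qquad \mathbf{n}^{\top}\mathbf{d} = 0,$$

where $\mathbf{d}$ is the line direction and $\mathbf{n}$ is the normal of the plane spanned by the line and the origin. This form makes rigid transformation and projection of the line linear, but it over-parameterizes the four degrees of freedom of a 3D line; the orthonormal representation $(\mathbf{U}, \mathbf{W}) \in SO(3) \times SO(2)$ of Bartoli and Sturm factors the same line into a minimal four-parameter form suited to the optimization update. The sliding-window objective combining the error terms then has the schematic shape

$$\min_{\mathcal{X}} \Big\{ \|\mathbf{r}_{p}\|^{2} + \sum_{k \in \mathcal{B}} \big\|\mathbf{r}_{B}\big(\mathbf{z}_{b_{k}b_{k+1}}, \mathcal{X}\big)\big\|_{\Sigma_{B}}^{2} + \sum_{(i,j) \in \mathcal{F}} \rho\Big(\big\|\mathbf{r}_{F}\big(\mathbf{z}_{f_{i}}^{c_{j}}, \mathcal{X}\big)\big\|_{\Sigma_{F}}^{2}\Big) + \sum_{(l,j) \in \mathcal{L}} \rho\Big(\big\|\mathbf{r}_{L}\big(\mathbf{z}_{l}^{c_{j}}, \mathcal{X}\big)\big\|_{\Sigma_{L}}^{2}\Big) \Big\},$$

where $\mathcal{X}$ stacks the keyframe states and feature parameters in the window, $\mathbf{r}_{p}$ is the marginalization prior, $\mathbf{r}_{B}$ the pre-integrated IMU residual between consecutive keyframes, $\mathbf{r}_{F}$ and $\mathbf{r}_{L}$ the point and line re-projection residuals with covariances $\Sigma_{F}$ and $\Sigma_{L}$, and $\rho(\cdot)$ a robust kernel such as Huber.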
