GVINS: Tightly Coupled GNSS–Visual–Inertial Fusion for Smooth and Consistent State Estimation

Visual-inertial odometry (VIO) is known to suffer from drift, especially over long-term runs. In this paper, we present GVINS, a nonlinear-optimization-based system that tightly fuses GNSS raw measurements with visual and inertial information for real-time, drift-free state estimation. Our system aims to provide accurate global 6-DoF estimation in complex indoor-outdoor environments where GNSS signals may be intermittent or even totally unavailable. To connect global measurements with local states, a coarse-to-fine initialization procedure is proposed to efficiently calibrate the transformation online and initialize the GNSS states from only a short window of measurements. The GNSS code pseudorange and Doppler shift measurements, together with the visual and inertial information, are then modelled and used to constrain the system states in a factor graph framework. For complex, GNSS-unfriendly areas, the degenerate cases are discussed and carefully handled to ensure robustness. Thanks to the tightly coupled multi-sensor approach and the system design, our system fully exploits the merits of the three types of sensors and is able to cope seamlessly with the transition between indoor and outdoor environments, where satellites are lost and reacquired. We extensively evaluate the proposed system in both simulation and real-world experiments, and the results demonstrate that our system substantially eliminates the drift of VIO and preserves local accuracy in spite of noisy GNSS measurements. Challenging indoor-outdoor and urban driving experiments verify the availability and robustness of GVINS in complex environments. In addition, experiments show that our system can benefit from even a single satellite, whereas conventional GNSS algorithms require at least four.
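For concreteness, the two GNSS observables named above admit the standard textbook measurement models; the notation and noise terms below are generic assumptions for illustration, not reproduced from the paper. The code pseudorange between receiver $r$ and satellite $s$ is

$$P_r^s = \lVert \mathbf{p}^s - \mathbf{p}_r \rVert + c\,(\delta t_r - \delta t^s) + T_r^s + I_r^s + \epsilon_P,$$

where $\mathbf{p}^s$ and $\mathbf{p}_r$ are the satellite and receiver positions, $\delta t^s$ and $\delta t_r$ their respective clock errors, and $T_r^s$, $I_r^s$ the tropospheric and ionospheric delays. The Doppler shift constrains velocity and receiver clock drift along the unit line-of-sight vector $\hat{\mathbf{e}}_r^s = (\mathbf{p}^s - \mathbf{p}_r)/\lVert \mathbf{p}^s - \mathbf{p}_r \rVert$:

$$\Delta f_r^s = -\frac{f}{c}\left[\,(\mathbf{v}^s - \mathbf{v}_r)\cdot \hat{\mathbf{e}}_r^s + c\,(\dot{\delta t}_r - \dot{\delta t}^s)\,\right] + \epsilon_D,$$

with $f$ the carrier frequency and $c$ the speed of light. In a factor graph, each such measurement contributes one residual coupling the receiver position and velocity states, expressed in a global frame, with the receiver clock bias and clock drift.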
