Accelerated Visual Inertial Navigation via Fragmented Structure Updates

Tightly coupled Visual-Inertial Navigation System (VINS) implementations have proven their superiority due to their ability to jointly optimize all state variables. However, this joint optimization is considered a computational bottleneck within the system, and thus many traditional VINS implementations can only run on platforms with powerful processors. In this work, we show that a significant reduction in the computational burden of the optimization can be achieved through the use of fragments: sets of co-visible feature points from across several different cameras that are efficiently split, categorized into groups, and then analyzed using a machine-learning-inspired frequent-pattern-growth algorithm. Furthermore, we use a reduced continuous representation during preintegration, which improves accuracy while requiring fewer computational resources. We validate our algorithm on datasets, showing that the derivation is not only more accurate but also requires significantly fewer computational resources. When tested on a Raspberry Pi, our implementation tracked the system’s state at nearly 20 frames per second using only a single CPU core, and testing on another low-power module shows a power draw of around 800 mW. Due to run-time considerations, the Raspberry Pi implementation was competitive only with respect to other optimization-based methodologies, but our algorithm displays remarkable accuracy on a more powerful platform.
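
To make the fragment construction concrete, the sketch below mines frequently co-visible feature sets with an off-the-shelf frequent-pattern-growth miner. This is an illustrative assumption only: the library (mlxtend), the toy observations, and every identifier are ours rather than the paper's implementation. Each keyframe is treated as a transaction of observed feature IDs, and the resulting frequent itemsets serve as candidate fragments.

# Sketch: grouping co-visible features into candidate "fragments" via FP-growth.
# Assumes the mlxtend library; data and identifiers are illustrative only.
import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import fpgrowth

# Each keyframe is a transaction listing the feature IDs it observes.
keyframe_observations = [
    ["f1", "f2", "f3", "f7"],
    ["f1", "f2", "f3"],
    ["f2", "f3", "f9"],
    ["f1", "f2", "f3", "f9"],
]

# One-hot encode the transactions for the miner.
encoder = TransactionEncoder()
onehot = encoder.fit(keyframe_observations).transform(keyframe_observations)
observations = pd.DataFrame(onehot, columns=encoder.columns_)

# Feature sets co-visible in at least 75% of keyframes become candidate
# fragments, which could then be split off and updated as a unit.
fragments = fpgrowth(observations, min_support=0.75, use_colnames=True)
print(fragments.sort_values("support", ascending=False))

In practice the transactions would come from the front-end's feature tracks, and the support threshold would be chosen relative to the sliding-window size; both choices here are placeholders.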
