RGB-D Inertial Odometry for Indoor Robot via Keyframe-based Nonlinear Optimization

In recent years, Visual-Inertial Odometry (VIO) has become an attractive topic in robotics. VIO aims to estimate a robot's trajectory by analyzing measurements from visual and inertial sensors. Several outstanding results have been reported on this topic; however, existing approaches suffer from long initialization times and a lack of robustness. In this paper, we present a novel tightly-coupled method that improves the accuracy and robustness of pose estimation by fusing image and depth information from an RGB-D camera with measurements from an inertial sensor. Instead of using a filter-based method, we feed the visual and inertial data into a nonlinear optimization framework. A robust optimization-based method is used to obtain a high-accuracy estimator initialization. To balance accuracy against computational complexity, we use a sliding-window optimizer over the keyframe pose graph. We also present an implementation of 3D reconstruction. We test our algorithm on an Automated Guided Vehicle (AGV) platform and open-source our implementation for PCs.
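The sketch below illustrates, in C++ with the Ceres solver, the general shape of a keyframe-based sliding-window nonlinear optimization of the kind the abstract refers to. The 4-DoF keyframe state (x, y, z, yaw, assuming a roughly planar AGV), the RelativeMotionResidual functor, the Huber loss, and the fixed window size are illustrative assumptions; they are not the authors' actual residual formulation, marginalization strategy, or released code.

```cpp
// Hedged sketch: a toy sliding-window pose-graph optimization with Ceres,
// standing in for the paper's keyframe-based nonlinear optimization.
#include <ceres/ceres.h>
#include <array>
#include <cstddef>
#include <deque>

// Assumed keyframe state: x, y, z position and yaw (planar-AGV simplification).
struct KeyframePose {
  std::array<double, 4> xyz_yaw{0.0, 0.0, 0.0, 0.0};
};

// Relative-motion residual between consecutive keyframes, e.g. from
// pre-integrated IMU or RGB-D visual odometry (treated identically here).
struct RelativeMotionResidual {
  RelativeMotionResidual(const std::array<double, 4>& meas, double weight)
      : meas_(meas), weight_(weight) {}

  template <typename T>
  bool operator()(const T* const pose_i, const T* const pose_j,
                  T* residual) const {
    // Penalize the difference between the predicted and measured relative motion.
    for (int k = 0; k < 4; ++k) {
      residual[k] = weight_ * ((pose_j[k] - pose_i[k]) - T(meas_[k]));
    }
    return true;
  }

  std::array<double, 4> meas_;
  double weight_;
};

int main() {
  // A toy window of five keyframes; a real system would marginalize old ones.
  std::deque<KeyframePose> window(5);
  // Hypothetical relative measurement between consecutive keyframes.
  const std::array<double, 4> meas{0.1, 0.0, 0.0, 0.02};

  ceres::Problem problem;
  for (std::size_t i = 0; i + 1 < window.size(); ++i) {
    problem.AddResidualBlock(
        new ceres::AutoDiffCostFunction<RelativeMotionResidual, 4, 4, 4>(
            new RelativeMotionResidual(meas, /*weight=*/1.0)),
        new ceres::HuberLoss(0.1),  // robust loss against visual outliers
        window[i].xyz_yaw.data(), window[i + 1].xyz_yaw.data());
  }
  // Fix the oldest keyframe to anchor the gauge freedom of the window.
  problem.SetParameterBlockConstant(window.front().xyz_yaw.data());

  ceres::Solver::Options options;
  options.linear_solver_type = ceres::DENSE_QR;
  ceres::Solver::Summary summary;
  ceres::Solve(options, &problem, &summary);
  return 0;
}
```

In a full system the residual set would also include reprojection and depth terms from the RGB-D keyframes and IMU pre-integration factors; the sliding window keeps the problem size bounded so the optimization stays real-time.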
