Visual Odometry Algorithm Using an RGB-D Sensor and IMU in a Highly Dynamic Environment

This paper proposes a robust visual odometry algorithm using a Kinect-style RGB-D sensor and an inertial measurement unit (IMU) in a highly dynamic environment. Based on the SURF (Speeded-Up Robust Features) descriptor, the proposed algorithm generates 3-D feature points by incorporating depth information into the RGB color information. Using the IMU, the generated 3-D feature points are rotated so that two consecutive images share the same rigid-body rotation component. Before the rigid-body transformation matrix between successive images from the RGB-D sensor is calculated, the 3-D feature points are classified as dynamic or static using their motion vectors. From the static feature points, the rigid-body transformation matrix is finally computed by the RANSAC (RANdom SAmple Consensus) algorithm. The experiments demonstrate that the proposed algorithm successfully obtains visual odometry for a subject and a mobile robot in a highly dynamic environment. A comparative study between the proposed method and a conventional visual odometry algorithm clearly shows the reliability of the approach for computing visual odometry in a highly dynamic environment.
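
As an illustration of the pipeline described above, the sketch below covers the three core steps applied to already-matched 3-D feature points: compensating the inter-frame rotation with the IMU measurement, classifying matches as static or dynamic from the magnitude of their motion vectors, and estimating the rigid-body transformation with a RANSAC loop around an SVD-based (Kabsch) least-squares fit. This is a minimal sketch, not the authors' implementation: the function names, motion threshold, and RANSAC parameters are assumptions, and the SURF feature extraction and matching stage is omitted.

```python
import numpy as np

def rotate_points(points, R_imu):
    """Rotate 3-D feature points (N x 3) by the IMU-measured rotation so that
    both frames share the same rigid-body rotation component."""
    return points @ R_imu.T

def filter_static(prev_pts, curr_pts, threshold=0.05):
    """Keep only matches whose motion vector (after rotation compensation)
    is below a magnitude threshold, i.e. treat them as static scene points.
    The threshold value here is an assumed example, not from the paper."""
    motion = np.linalg.norm(curr_pts - prev_pts, axis=1)
    mask = motion < threshold
    return prev_pts[mask], curr_pts[mask]

def rigid_transform(A, B):
    """Least-squares rigid-body transform (Kabsch / SVD) mapping points A onto B."""
    cA, cB = A.mean(axis=0), B.mean(axis=0)
    H = (A - cA).T @ (B - cB)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection in the SVD solution.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cB - R @ cA
    return R, t

def ransac_rigid_transform(A, B, iters=200, inlier_tol=0.03):
    """Estimate the rigid-body transform with RANSAC over minimal 3-point samples,
    then refine on the largest inlier set found."""
    best_R, best_t, best_inliers = np.eye(3), np.zeros(3), 0
    n = len(A)
    rng = np.random.default_rng(0)
    for _ in range(iters):
        idx = rng.choice(n, size=3, replace=False)
        R, t = rigid_transform(A[idx], B[idx])
        residuals = np.linalg.norm((A @ R.T + t) - B, axis=1)
        inliers = residuals < inlier_tol
        if inliers.sum() > best_inliers:
            best_inliers = inliers.sum()
            best_R, best_t = rigid_transform(A[inliers], B[inliers])
    return best_R, best_t
```

Three point correspondences are the minimal sample for a 3-D rigid-body transform, which is why each RANSAC hypothesis is fit from three matches before being refined on its inlier set; applying the static/dynamic filter first reduces the fraction of outliers RANSAC has to reject in a highly dynamic scene.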
