Substantial improvement of stereo visual odometry by multi-path feature tracking

Visual odometry (VO) is the recovery of a camera's trajectory from an image sequence. Stereo VO techniques solve the egomotion estimation problem by means of 3D scene structure derived from disparity; typically, however, one of the two images is used only for disparity computation. In this paper we develop a generic feature tracking framework that extends the classical VO problem into a higher dimension, in which the image data of both cameras are fully used. Six tracking topologies proposed in the literature, namely linear, lookahead, stereo linear, parallel, circular and cross-eye, are reviewed and evaluated. The experimental results show that incorporating the right images throughout the feature tracking process yields clear benefits over the typical stereo VO implementation. The stereo-parallel configuration, which maintains feature tracking independently on each camera and integrates the tracked features via left-right matching, achieves the largest improvement: 30% over the conventional linear configuration.
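To make the stereo-parallel topology concrete, the following Python/OpenCV fragment is a minimal sketch, not the paper's implementation. It assumes the left and right feature arrays are index-aligned by an initial stereo match, and that the rectified projection matrices P_l, P_r and the intrinsic matrix K are known from calibration.

import cv2
import numpy as np

def stereo_parallel_step(prev_l, prev_r, cur_l, cur_r, pts_l, pts_r, K, P_l, P_r):
    """One frame-to-frame step of the stereo-parallel topology:
    tracking is maintained independently per camera, and the two
    paths are fused via the left-right correspondence before pose
    estimation (a sketch under the assumptions stated above)."""
    pts_l = np.float32(pts_l).reshape(-1, 1, 2)
    pts_r = np.float32(pts_r).reshape(-1, 1, 2)

    # Parallel paths: KLT tracking runs separately on each camera.
    cur_l_pts, st_l, _ = cv2.calcOpticalFlowPyrLK(prev_l, cur_l, pts_l, None)
    cur_r_pts, st_r, _ = cv2.calcOpticalFlowPyrLK(prev_r, cur_r, pts_r, None)

    # Integration step: keep only features tracked in BOTH views.
    ok = (st_l.ravel() == 1) & (st_r.ravel() == 1)
    pts_l, pts_r, cur_l_pts = pts_l[ok], pts_r[ok], cur_l_pts[ok]

    # Triangulate the previous-frame left-right pairs into 3D points.
    X_h = cv2.triangulatePoints(P_l, P_r,
                                pts_l.reshape(-1, 2).T,
                                pts_r.reshape(-1, 2).T)
    X = (X_h[:3] / X_h[3]).T  # homogeneous to Euclidean, N x 3

    # Egomotion from 3D-to-2D correspondences with RANSAC rejection.
    ok_pnp, rvec, tvec, inliers = cv2.solvePnPRansac(
        X, cur_l_pts.reshape(-1, 2), K, None)
    return rvec, tvec, inliers

In practice one would reseed the tracks (for example with cv2.goodFeaturesToTrack followed by a fresh stereo match) whenever the surviving set becomes too small; the point of the sketch is only the topology, that is, two independent tracking paths joined by a left-right integration step.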
