Memory-based learning for visual odometry

We present and examine a technique for estimating the ego-motion of a mobile robot using memory-based learning and a monocular camera. Unlike other approaches that rely heavily on camera calibration and geometry to compute trajectory, our method learns a mapping from sparse optical flow to platform velocity and turn rate. We also demonstrate an efficient method of computing high-quality sparse optical flow, and techniques for using this sparse optical flow as input to a supervised learning method. We employ a voting scheme of many learners that use subsets of the sparse optical flow to cope with variable dimensionality and reduce the dimensionality of each learner. Finally, we perform experiments in which we examine the learned mapping for visual odometry, investigate the effects of varying the reduced dimensionality of the sparse optical flow state, and quantify the accuracy of two variations of our learner scheme. Our results indicate that our learning scheme estimates monocular visual odometry mainly from points on the ground plane, and reflect to a degree the minimum dimensionality imposed by the problem. In addition, we show that while this memory-based learning method cannot yet estimate ego-motion as accurately as recent geometric methods, it is possible to learn, with no explicit model of camera calibration or scene structure, complicated mappings that take advantage of properties of the camera and the environment.
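The voting scheme described above — many memory-based learners, each regressing platform velocity and turn rate from its own fixed subset of the sparse optical flow — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the class names, the choice of k-nearest-neighbor lookup as the memory-based learner, and averaging as the voting rule are all assumptions for the sketch.

```python
import random

class SubsetKNNLearner:
    """One memory-based learner that sees only a fixed subset of flow dimensions.
    (k-NN regression is assumed here as the memory-based method.)"""

    def __init__(self, dims, k=1):
        self.dims = dims      # indices of the flow components this learner uses
        self.k = k
        self.memory = []      # stored (projected_flow, (velocity, turn_rate)) pairs

    def _project(self, flow):
        return [flow[i] for i in self.dims]

    def store(self, flow, motion):
        self.memory.append((self._project(flow), motion))

    def predict(self, flow):
        q = self._project(flow)
        # Sort stored examples by squared distance in the reduced flow space.
        nearest = sorted(
            self.memory,
            key=lambda m: sum((a - b) ** 2 for a, b in zip(m[0], q)),
        )[: self.k]
        v = sum(m[1][0] for m in nearest) / len(nearest)
        w = sum(m[1][1] for m in nearest) / len(nearest)
        return v, w


class VotingEnsemble:
    """Combine many subset learners; each votes, and votes are averaged.
    Random subsets reduce each learner's input dimensionality."""

    def __init__(self, n_learners, flow_dim, subset_size, k=1, seed=0):
        rng = random.Random(seed)
        self.learners = [
            SubsetKNNLearner(rng.sample(range(flow_dim), subset_size), k)
            for _ in range(n_learners)
        ]

    def store(self, flow, motion):
        for learner in self.learners:
            learner.store(flow, motion)

    def predict(self, flow):
        preds = [learner.predict(flow) for learner in self.learners]
        v = sum(p[0] for p in preds) / len(preds)
        w = sum(p[1] for p in preds) / len(preds)
        return v, w
```

In use, each training frame pair contributes one (flow, motion) example to every learner's memory; at query time the per-learner estimates are pooled, which also lets the ensemble tolerate flow vectors of variable quality across the image.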
