Homography based visual odometry with known vertical direction and weak Manhattan world assumption

In this paper we present a novel visual odometry pipeline that exploits the weak Manhattan world assumption and a known vertical direction. We introduce novel 2-point and 2.5-point methods for computing the essential matrix under the Manhattan world assumption with known vertical direction, which improve the efficiency of relative motion estimation in the visual odometry pipeline. Similarly, an efficient 2-point algorithm for absolute camera pose estimation from 3D-2D correspondences further speeds up the pipeline. We show that the weak Manhattan world assumption and the known vertical direction allow for direct relative scale estimation without recovering the 3D structure. We evaluate our algorithms on synthetic data and demonstrate their application on real data sets from camera phones and robotic micro aerial vehicles. Our experiments show that the weak Manhattan world assumption holds for many real-world scenarios.

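As a rough illustration of the idea behind such reduced minimal solvers (not the paper's actual 2-point or 2.5-point algorithm), the following Python/NumPy sketch shows how a known vertical direction, e.g. the gravity vector from an IMU, simplifies relative pose estimation: each view is first rotated so that gravity aligns with the y-axis, after which the remaining relative rotation is a pure yaw R_y(theta) and the essential matrix takes the form E = [t]_x R_y(theta), leaving only the yaw angle and the translation direction unknown. The additional weak Manhattan constraint reduces the unknowns further, which is what enables the 2-point and 2.5-point formulations. Function names and the synthetic data below are illustrative assumptions.

```python
import numpy as np

def gravity_alignment(g_cam):
    """Rotation taking the gravity direction measured in the camera frame
    onto the vertical axis [0, 1, 0], removing roll and pitch (illustrative)."""
    g = g_cam / np.linalg.norm(g_cam)
    v = np.array([0.0, 1.0, 0.0])
    axis = np.cross(g, v)
    s, c = np.linalg.norm(axis), float(np.dot(g, v))
    if s < 1e-12:                                    # already vertical (or flipped)
        return np.eye(3) if c > 0 else np.diag([1.0, -1.0, -1.0])
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]]) / s
    return np.eye(3) + s * K + (1.0 - c) * (K @ K)   # Rodrigues' formula

def essential_known_vertical(theta, t):
    """Essential matrix between two gravity-aligned views: the remaining
    rotation is a pure yaw R_y(theta), so E = [t]_x R_y(theta) has only
    three unknowns (yaw plus the translation direction)."""
    c, s = np.cos(theta), np.sin(theta)
    R_y = np.array([[ c, 0.0,  s],
                    [0.0, 1.0, 0.0],
                    [-s, 0.0,  c]])
    t_x = np.array([[0.0, -t[2],  t[1]],
                    [ t[2], 0.0, -t[0]],
                    [-t[1],  t[0], 0.0]])
    return t_x @ R_y

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    theta, t = 0.3, np.array([0.2, 0.0, 1.0])        # ground-truth yaw and translation
    E = essential_known_vertical(theta, t)

    # Synthetic 3D points in front of the first (gravity-aligned) camera.
    X = rng.uniform([-1, -1, 3], [1, 1, 6], size=(10, 3))
    c, s = np.cos(theta), np.sin(theta)
    R_y = np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
    x1 = X / X[:, 2:3]                               # normalized image points, view 1
    X2 = (R_y @ X.T).T + t
    x2 = X2 / X2[:, 2:3]                             # normalized image points, view 2

    # The epipolar constraint x2^T E x1 = 0 should hold for every correspondence.
    residuals = np.einsum('ij,jk,ik->i', x2, E, x1)
    print(np.max(np.abs(residuals)))                 # close to machine precision
```

In a real pipeline, the images would first be de-rotated with gravity_alignment before estimating the two remaining parameters, and two point correspondences (plus the Manhattan constraint) would then determine theta and the translation direction.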