Visual homing: Surfing on the epipoles

We introduce a novel method for visual homing. Using this method, a robot can be sent to a desired position and orientation in 3-D space, specified by a single image taken from that pose. Our method determines the robot's path on-line: the starting position is unconstrained, and no 3-D model of the environment is required. The method is based on recovering the epipolar geometry relating the current image taken by the robot to the target image. From the epipolar geometry, most of the parameters describing the difference in camera position and orientation between the two images are recovered. However, since not all of the parameters can be recovered from two images, we have developed specific methods to bypass the missing parameters and resolve the remaining ambiguities. We present two homing algorithms, one for each of two standard projection models: weak perspective and full perspective. Simulations and real experiments demonstrate the robustness of the method and show that the algorithms converge to the target pose.
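The epipolar geometry between the current and target images is captured by the fundamental matrix F, which satisfies x2ᵀ F x1 = 0 for every pair of corresponding image points. A standard way to estimate it — cited in the references as Longuet-Higgins' algorithm with Hartley's normalization — is the normalized eight-point algorithm. The sketch below is an illustrative NumPy implementation of that classical estimator, not the paper's own homing code; the camera setup in the usage comments is assumed for demonstration.

```python
import numpy as np

def eight_point(x1, x2):
    """Estimate the fundamental matrix F from N >= 8 point
    correspondences x1[i] <-> x2[i] (each row an (x, y) pair),
    using the normalized eight-point algorithm (Hartley 1997)."""

    def normalize(pts):
        # Translate the centroid to the origin and scale so the
        # mean distance from the origin is sqrt(2).
        c = pts.mean(axis=0)
        d = np.sqrt(((pts - c) ** 2).sum(axis=1)).mean()
        s = np.sqrt(2.0) / d
        T = np.array([[s, 0.0, -s * c[0]],
                      [0.0, s, -s * c[1]],
                      [0.0, 0.0, 1.0]])
        homog = np.column_stack([pts, np.ones(len(pts))])
        return homog @ T.T, T

    p1, T1 = normalize(np.asarray(x1, float))
    p2, T2 = normalize(np.asarray(x2, float))

    # Each correspondence contributes one row of the linear system
    # A f = 0, where f is F flattened row-wise.
    A = np.column_stack([p2[:, 0:1] * p1,   # x2*x1, x2*y1, x2
                         p2[:, 1:2] * p1,   # y2*x1, y2*y1, y2
                         p1])               # x1,    y1,    1
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)

    # Enforce the rank-2 constraint by zeroing the smallest
    # singular value, so the epipolar lines intersect in the epipole.
    U, S, Vt = np.linalg.svd(F)
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt

    # Undo the normalization and fix the overall scale.
    F = T2.T @ F @ T1
    return F / np.linalg.norm(F)
```

Given a valid F, each point x1 in the current image constrains its match in the target image to the epipolar line F x1, and the null spaces of F and Fᵀ give the two epipoles that the homing method "surfs" on.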
