Omni-directional vision for robot navigation

We describe a method for vision-based robot navigation with a single omni-directional (catadioptric) camera. We show how omni-directional images can be used to generate the representations needed for two main navigation modalities: Topological Navigation and Visual Path Following. Topological Navigation relies on the robot's qualitative global position, estimated from a set of omni-directional images obtained during a training stage and compressed using PCA. To deal with illumination changes, an eigenspace approximation to the Hausdorff measure is exploited. We present a method to transform omni-directional images into Bird's Eye Views, which correspond to scaled orthographic views of the ground plane. These images are used to locally control the orientation of the robot through visual servoing. Visual Path Following is used to accurately control the robot along a prescribed trajectory, using Bird's Eye Views to track landmarks on the ground plane. Owing to the simplified geometry of these images, the robot's pose can be estimated easily and used for accurate trajectory following. Omni-directional images facilitate landmark-based navigation, since landmarks remain visible in all images, in contrast to a standard camera with a small field of view. They also provide adequate representations to support both accurate and qualitative navigation. Experimental results are presented in the paper.
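The appearance-based localization step lends itself to a compact sketch. The Python code below is an illustrative approximation, not the authors' implementation: it compresses a set of flattened grayscale omni-directional training images with PCA and localizes a query image by nearest neighbour in the resulting eigenspace. The eigenspace approximation to the Hausdorff measure used against illumination changes, and the Bird's Eye View geometry, are omitted. Function names and the `n_components` parameter are assumptions made for illustration only.

```python
import numpy as np

def build_eigenspace(train_images, n_components=10):
    """Compress training omni-directional images with PCA.

    train_images: array of shape (n_images, n_pixels), one flattened
    image per row. Returns the mean image, the principal components,
    and the low-dimensional coordinates of every training image.
    """
    mean = train_images.mean(axis=0)
    centered = train_images - mean
    # SVD of the centered data; rows of vt are the principal directions.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    components = vt[:n_components]            # (k, n_pixels)
    coords = centered @ components.T          # (n_images, k)
    return mean, components, coords

def localize(query_image, mean, components, coords):
    """Index of the training image closest to the query in eigenspace,
    i.e. an estimate of the robot's qualitative (topological) position."""
    q = (query_image - mean) @ components.T
    distances = np.linalg.norm(coords - q, axis=1)
    return int(np.argmin(distances))
```

In this sketch, each training image stands for a qualitative position along the route, so the nearest neighbour in eigenspace plays the role of the robot's topological location estimate.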
