This paper presents a method for mobile robot localisation in an indoor environment. It is based on an approach in which the scene is "learned" by taking images from a large number of positions (x, y) and orientations (θ) along a 2D grid. The positioning problem is then reduced to associating an unknown input image with a "learned" image. A brute-force solution requires too many image correlations and would be computationally too expensive for a real-time system. To overcome this, the image set is compressed using principal components analysis, converting the search problem into an addressing problem. The aim is to obtain a low-dimensional subspace in which the visual workspace is represented as a continuous appearance manifold. The current robot pose is then determined by projecting an unknown input image into the eigenspace and comparing its position to the appearance manifold.
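The pipeline described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the image set, poses, and eigenspace dimension are made-up assumptions, the appearance manifold is approximated by its sampled training points, and the lookup is a plain nearest-neighbour search in the eigenspace.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "learned" image set: N images taken at known poses
# (x, y, theta), each flattened to a D-pixel vector.
N, D, K = 200, 1024, 8            # images, pixels, eigenspace dimension
images = rng.random((N, D))
poses = rng.random((N, 3))        # (x, y, theta) for each training image

# PCA: subtract the mean image and keep the top-K principal components.
mean = images.mean(axis=0)
centered = images - mean
_, _, Vt = np.linalg.svd(centered, full_matrices=False)
basis = Vt[:K]                    # K x D eigenvectors ("eigenimages")

# Project the training set into the K-dimensional eigenspace;
# these points sample the appearance manifold.
manifold = centered @ basis.T     # N x K

def localise(image):
    """Project an unknown image into the eigenspace and return the pose
    of the nearest manifold point. The search now compares K-dimensional
    vectors instead of full-resolution image correlations."""
    q = (image - mean) @ basis.T
    i = np.argmin(np.linalg.norm(manifold - q, axis=1))
    return poses[i]

# An image re-observed from a known grid pose localises to that pose.
est = localise(images[42])
```

The compression is what makes real-time operation plausible: each query costs one K-dimensional projection plus N small distance computations, rather than N full-image correlations.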