We describe a mobile robot navigation behavior based on learning from the visual environment of a particular task, which enables the robot to repeat a previously learned navigation task. Navigation is performed using iconic visual information and motor information stored during a reference (learning) run. This representation then guides the robot at large scale on subsequent independent runs. A system of competing behaviors handles near- to medium-distance navigation problems such as obstacle detection and avoidance, finding independently moving objects, and determining potential "danger". A special feature of this system is the use of retinal images. The large-scale navigation behavior is based on a "vergence-like" mechanism that determines the direction of the disparity between the current visual structure and the stored model; this method is very simple and computationally fast. The stored motor information is used for comparison to give a gross position estimate.
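To make the "vergence-like" idea concrete, the sketch below estimates only the direction of the horizontal disparity between the current view and a stored reference image, which is all the steering behavior needs. This is a minimal illustration, not the paper's implementation: it assumes ordinary Cartesian grayscale frames as NumPy arrays rather than the retinal images the system actually uses, and the function name, the 1-D correlation matcher, and the `max_shift` parameter are assumptions introduced here.

import numpy as np

def disparity_direction(current: np.ndarray, stored: np.ndarray,
                        max_shift: int = 32) -> int:
    """Return -1, 0, or +1: the sign of the horizontal shift that
    best aligns the current image with the stored reference.
    (Illustrative sketch; the paper operates on retinal images.)"""
    # Collapse each image to a normalized 1-D horizontal profile.
    cur = current.mean(axis=0)
    ref = stored.mean(axis=0)
    cur = (cur - cur.mean()) / (cur.std() + 1e-9)
    ref = (ref - ref.mean()) / (ref.std() + 1e-9)

    n = len(ref)
    best_shift, best_score = 0, -np.inf
    for s in range(-max_shift, max_shift + 1):
        # Correlate the overlapping parts of the shifted profiles,
        # normalized by overlap length so large shifts are not penalized.
        if s >= 0:
            score = float(np.dot(cur[s:], ref[:n - s])) / (n - s)
        else:
            score = float(np.dot(cur[:s], ref[-s:])) / (n + s)
        if score > best_score:
            best_shift, best_score = s, score
    return int(np.sign(best_shift))

# Hypothetical usage: turn the robot toward the stored view.
# turn_command = disparity_direction(camera_frame, reference_frame)

Because only the sign of the disparity drives the correction, the matcher can be coarse and cheap, which is consistent with the abstract's claim that the method is simple and computationally fast.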