Learning mobile robot navigation: a behavior-based approach

We describe a mobile robot navigation behavior based on learning from the visual environment of a particular task. This approach enables the robot to repeat a previously learned navigation task. Navigation is performed from iconic visual information and motor information stored during a reference (learning) run. This representation is then used to guide the robot at large scale on subsequent independent runs. A system of competing behaviors handles near- to medium-distance navigation problems such as detecting and avoiding obstacles, finding independently moving objects, and determining potential "danger". A special feature of this system is its use of retinal images. The large-scale navigation behavior is based on a "vergence-like" mechanism that determines the direction of the disparity between the current visual structure and the stored model; this method is very simple and computationally fast. Motor information is used for comparison to give a gross position estimate.
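
To illustrate the flavor of the "vergence-like" mechanism, the sketch below recovers only the sign of the horizontal disparity between the current view and a stored iconic snapshot. This is not the authors' implementation: the column-profile correlation, the search window `max_shift`, and the grayscale-image assumption are all illustrative choices.

```python
# Minimal sketch (assumed, not the paper's code): recover only the
# *direction* of the horizontal disparity between the current image
# and a stored reference snapshot of equal width.
import numpy as np

def disparity_direction(current: np.ndarray, stored: np.ndarray,
                        max_shift: int = 16) -> int:
    """Return -1, 0, or +1: the sign of the horizontal shift that best
    aligns `current` with `stored` (2-D grayscale arrays)."""
    # Collapse rows into 1-D column profiles: cheap and fast, in the
    # spirit of the paper's emphasis on computational simplicity.
    cur = current.mean(axis=0)
    ref = stored.mean(axis=0)
    cur = (cur - cur.mean()) / (cur.std() + 1e-9)
    ref = (ref - ref.mean()) / (ref.std() + 1e-9)

    best_shift, best_score = 0, -np.inf
    for s in range(-max_shift, max_shift + 1):
        if s >= 0:
            a, b = cur[s:], ref[:ref.size - s]
        else:
            a, b = cur[:s], ref[-s:]
        score = float(np.dot(a, b)) / a.size  # mean product over overlap
        if score > best_score:
            best_score, best_shift = score, s
    return int(np.sign(best_shift))
```

Because only the sign of the disparity is used to steer (turn left or right until the current view converges on the stored one), the mechanism needs no metric depth or full correspondence, which is what keeps it simple and fast.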
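
The competing-behaviors layer can likewise be sketched as fixed-priority arbitration, where the highest-priority applicable behavior wins each control cycle. The behavior set, thresholds, priorities, and sensor fields below are hypothetical; the abstract does not specify the actual arbitration scheme.

```python
# Hypothetical sketch of fixed-priority arbitration among competing
# behaviors; the specific behaviors and priorities are assumptions.
from typing import Callable, List, Optional, Tuple

Command = Tuple[float, float]                    # (forward velocity, turn rate)
Behavior = Callable[[dict], Optional[Command]]   # returns None when inactive

def avoid_obstacle(sensors: dict) -> Optional[Command]:
    # Fires when something is close: slow down and turn away.
    if sensors["nearest_obstacle_m"] < 0.5:
        return (0.0, 1.0 if sensors["obstacle_on_left"] else -1.0)
    return None

def react_to_danger(sensors: dict) -> Optional[Command]:
    # Fires when an independently moving object is flagged as "danger".
    if sensors.get("danger"):
        return (0.0, 0.0)  # stop, in this sketch
    return None

def home_on_snapshot(sensors: dict) -> Optional[Command]:
    # Default large-scale behavior: steer by the disparity-direction sign.
    return (0.3, -0.2 * sensors["disparity_direction"])

# Ordered from highest to lowest priority.
BEHAVIORS: List[Behavior] = [avoid_obstacle, react_to_danger, home_on_snapshot]

def arbitrate(sensors: dict) -> Command:
    for behavior in BEHAVIORS:
        command = behavior(sensors)
        if command is not None:
            return command
    return (0.0, 0.0)  # no behavior active: stand still
```

In this reading, the large-scale homing behavior acts as the default, while near- to medium-distance behaviors such as obstacle avoidance preempt it when triggered.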