Autonomous finding of landmarks for guiding long-distance navigation by a mobile robot is explored. In a trial run, the robot continuously views and memorizes scenes along the route. When the same route is traversed again, the robot locates and orients itself using the memorized scenes. Since the stream of images is highly redundant, it is transformed into an intermediate 2½D representation, called the panoramic representation (PR), which requires far less data. Although the PR can guide autonomous navigation, it still contains a large amount of data for a very long route. A human memorizes only the most impressive objects along a route and uses them as landmarks; likewise, the robot finds distinctive objects along the route and memorizes their features and spatial relationships. 3D objects are segmented in the PR by fusing range estimates and color attributes, and a structure map representing their arrangement in space is then obtained. To find distinctive objects for use as landmarks, the spatial relationships, shapes, and color attributes of the objects are examined.
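As a rough illustration of the landmark-selection idea, the sketch below scores segmented objects by how much their color and shape attributes differ from those of spatially nearby objects and keeps the most distinctive ones. The object fields, the neighborhood radius, and the distinctiveness measure are assumptions for illustration only; the paper selects landmarks from the PR using its own segmentation, structure map, and criteria.

```python
# Hypothetical sketch of landmark selection among segmented route objects.
# The Obj fields and the distinctiveness score are illustrative assumptions,
# not the paper's actual formulation.
from dataclasses import dataclass


@dataclass
class Obj:
    hue: float       # dominant color hue in degrees (assumed attribute)
    height: float    # estimated object height in meters (assumed attribute)
    width: float     # estimated object width in meters (assumed attribute)
    position: float  # position along the route in meters (assumed attribute)


def distinctiveness(obj: Obj, all_objs: list[Obj], radius: float = 20.0) -> float:
    """Score how much an object stands out from objects within `radius` meters."""
    neighbors = [p for p in all_objs
                 if p is not obj and abs(p.position - obj.position) < radius]
    if not neighbors:
        return float("inf")  # an isolated object is trivially distinctive
    score = 0.0
    for p in neighbors:
        # Circular hue distance, normalized to [0, 1].
        hue_diff = min(abs(obj.hue - p.hue), 360.0 - abs(obj.hue - p.hue)) / 180.0
        # Simple shape dissimilarity from size differences.
        shape_diff = abs(obj.height - p.height) + abs(obj.width - p.width)
        score += hue_diff + shape_diff
    return score / len(neighbors)


def select_landmarks(objects: list[Obj], k: int = 10) -> list[Obj]:
    """Keep the k objects that differ most from their spatial neighbors."""
    return sorted(objects, key=lambda o: -distinctiveness(o, objects))[:k]
```

A real system would of course draw these attributes from the segmented PR and weigh them against the structure map, but the sketch conveys the basic idea of ranking objects by how much they stand out from their surroundings.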