The Navlab project, which seeks to build an autonomous robot that can operate in a realistic environment with bad weather, bad lighting, and bad or changing roads, is discussed. The perception techniques developed for the Navlab include road-following techniques using color classification and neural nets. These are discussed with reference to three road-following systems: SCARF, YARF, and ALVINN. Three-dimensional perception using three types of terrain representation (obstacle maps, terrain feature maps, and high-resolution maps) is examined. It is noted that perception continues to be an obstacle in developing autonomous vehicles. This work is part of the Defense Advanced Research Projects Agency Strategic Computing Initiative.