ELVIS (Eigenvectors for Land Vehicle Image System) is a road-following system designed to drive the CMU Navlabs. It is based on ALVINN, the neural network road-following system built by Dean Pomerleau at CMU. ELVIS is an attempt to understand ALVINN more fully and to determine whether a system using the same inputs and outputs, but no neural network, can rival ALVINN. Like ALVINN, ELVIS observes the road through a video camera and observes the human driver's steering responses through encoders mounted on the steering column. After a few minutes of observing the human trainer, ELVIS can take control. ELVIS learns the eigenvectors of the combined image-and-steering training set via principal component analysis; these eigenvectors roughly correspond to the primary features of the image set and their correlations with steering. Road-following is then performed by projecting new images onto the previously computed eigenspace. The ELVIS architecture and experiments are discussed, along with implications for eigenvector-based systems and how they compare with neural-network-based systems.
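A minimal sketch of this kind of eigenspace scheme is shown below, assuming flattened grey-level frames paired with recorded steering values. The function names, the single steering_weight scaling factor, and the step that estimates projection coefficients from the image portion of the eigenvectors alone are illustrative assumptions, not details taken from the ELVIS paper.

```python
import numpy as np

def train_eigenspace(images, steering, n_components=8, steering_weight=1.0):
    """Build a PCA eigenspace from flattened training frames paired with the
    steering readings observed while a human drives (illustrative sketch).

    images   : (n_samples, n_pixels) array of flattened grey-level frames
    steering : (n_samples,) array of steering encoder readings
    """
    # Append each frame's (weighted) steering value as one extra component,
    # so the eigenvectors capture image/steering correlations jointly.
    data = np.hstack([images, steering_weight * steering[:, None]])
    mean = data.mean(axis=0)
    centered = data - mean
    # Principal components via SVD; rows of vt are the eigenvectors.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    components = vt[:n_components]            # (n_components, n_pixels + 1)
    return mean, components

def predict_steering(image, mean, components, steering_weight=1.0):
    """Project a new frame onto the eigenspace and read steering off the
    reconstructed steering coordinate (a simple approximation: the projection
    coefficients are estimated from the image part of the eigenvectors only,
    since the steering value is what we are trying to recover)."""
    img = np.asarray(image, dtype=float).ravel()
    n_pixels = img.size
    img_part = components[:, :n_pixels]        # image columns of eigenvectors
    steer_part = components[:, n_pixels:]      # steering column
    coeffs = img_part @ (img - mean[:n_pixels])
    steer = mean[n_pixels] + coeffs @ steer_part[:, 0]
    return steer / steering_weight

# Purely illustrative usage on synthetic data:
# rng = np.random.default_rng(0)
# imgs = rng.random((200, 30 * 32))      # 200 small reduced-resolution frames
# steer = rng.uniform(-1.0, 1.0, 200)    # recorded steering positions
# mean, comps = train_eigenspace(imgs, steer)
# print(predict_steering(imgs[0], mean, comps))
```

Weighting the steering component relative to the pixel values matters in a scheme like this, since an unweighted single steering entry would contribute almost nothing to the principal components of a high-dimensional image vector; the choice of weight here is an assumption, not a value from the paper.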