Using Visual Routines to Drive in a Virtual Environment

Abstract

This paper describes the use of visual routines for autonomous driving. The routines are developed on a platform that uses a pipeline video processor to analyze images representing the view from a car driving in a virtual environment. The simulator can also be used with human subjects, whose eye movements can be tracked inside a freely moving virtual reality helmet.
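To make the idea of a visual routine concrete, the sketch below shows a minimal, hypothetical Ullman-style routine for locating a lane edge: a crude edge filter followed by a ray search from a fixation point, with the hit recorded as a marker. The function names, thresholds, and synthetic road image are illustrative assumptions, not the paper's actual platform or API.

```python
import numpy as np

def horizontal_gradient(image):
    # Primitive 1: a crude horizontal edge filter (stand-in for the
    # pipeline processor's convolution stage).
    grad = np.zeros_like(image, dtype=float)
    grad[:, 1:-1] = np.abs(image[:, 2:] - image[:, :-2]) / 2.0
    return grad

def scan_row_for_edge(grad_row, start_col, direction=1, thresh=0.2):
    # Primitive 2: search outward along a scan line from the fixation
    # column until the edge response exceeds the threshold.
    col = start_col
    while 0 <= col < len(grad_row):
        if grad_row[col] > thresh:
            return col  # marker: lane-edge column on this row
        col += direction
    return None

def lane_edge_routine(image, fixation_col, scan_rows):
    # The routine: apply the filter once, then run the ray search on a
    # few rows near the bottom of the image (the nearby road surface).
    grad = horizontal_gradient(image)
    markers = {}
    for row in scan_rows:
        hit = scan_row_for_edge(grad[row], fixation_col, direction=1)
        if hit is not None:
            markers[row] = hit
    return markers

if __name__ == "__main__":
    # Synthetic 64x64 "road": dark asphalt with a bright stripe at column 45.
    road = np.zeros((64, 64))
    road[:, 45:47] = 1.0
    print(lane_edge_routine(road, fixation_col=32, scan_rows=[60, 61, 62]))
```

The point of the composition is that the same primitives (filter, ray search, marker placement) can be re-sequenced to answer different driving queries, which is the sense of "visual routines" the paper builds on.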
