The YARF system for vision-based road following

The YARF system extracts road features from color images in order to drive a vehicle along the road. Central to the system is a model of the road that includes both geometric information about the relative placement of the features defining the road's lane structure and information about the appearance of those features. This model is used to selectively apply specialized image segmentation methods for feature detection, to provide the system of constraints relating feature positions to the vehicle's position on the road, and to provide a context of expectations about the road's appearance that can be used to analyze situations in which expected features are not seen. Experimental results show the benefit of using multiple specialized image segmentation techniques, the advantages of least-median-of-squares estimation when the data contain outliers, and the ability of YARF to detect intersections and changes in lane structure from the failure to find expected features in the image.
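As a rough illustration of the robust estimation step mentioned above, the sketch below fits a straight line to candidate lane-marker points using least median of squares. The function name `lmeds_line_fit`, the trial count, and the toy point data are hypothetical; YARF's actual feature trackers and road-shape models are not reproduced here, and a straight-line model is used only to keep the example self-contained.

```python
import random


def lmeds_line_fit(points, n_trials=200, seed=0):
    """Least-median-of-squares fit of a line y = a*x + b (illustrative sketch).

    Repeatedly fits a candidate line through a random pair of points and keeps
    the candidate whose median squared residual over all points is smallest.
    Unlike ordinary least squares, the estimate tolerates a large fraction of
    gross outliers among the candidate feature points.
    """
    rng = random.Random(seed)
    best = None  # (median squared residual, slope, intercept)
    for _ in range(n_trials):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue  # skip vertical candidate lines in this simple sketch
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        residuals = sorted((y - (a * x + b)) ** 2 for x, y in points)
        med = residuals[len(residuals) // 2]
        if best is None or med < best[0]:
            best = (med, a, b)
    return best


if __name__ == "__main__":
    # Mostly collinear "lane marker" points plus a few gross outliers,
    # standing in for misdetections from shadows or surface clutter.
    inliers = [(x, 0.5 * x + 2.0 + 0.05 * ((x * 7) % 3 - 1)) for x in range(20)]
    outliers = [(3, 40.0), (11, -25.0), (17, 60.0)]
    med, a, b = lmeds_line_fit(inliers + outliers)
    print(f"slope={a:.3f}  intercept={b:.3f}  median residual={med:.4f}")
```

Because the estimate minimizes the median rather than the sum of squared residuals, a handful of gross outliers leaves the fitted parameters essentially unchanged, which is the property the experimental comparison in the abstract refers to.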
