Navigation and Control of an Autonomous Vehicle

Abstract. This paper describes the fusion of sensor data for the navigation of an autonomous vehicle, as well as two lateral control concepts that track the vehicle along a desired path. The navigation data fusion is based on information provided by multiple object-detecting sensors. The object data are fused to increase accuracy and to obtain the vehicle's state from its relative motion with respect to the detected objects. The presented lateral control methods are an LQG/H2 design and an input-output linearizing algorithm; both control schemes were implemented on a test vehicle.
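
The abstract does not reproduce the fusion equations. As a rough illustration only, a Kalman-filter-style measurement update is a common way to fuse object-relative observations from several sensors into one state estimate; the state vector, measurement models, and noise values below are assumptions for the sketch, not the authors' design.

```python
import numpy as np

# Illustrative (assumed) planar vehicle state: [x, y, heading, speed].
x_est = np.array([0.0, 0.0, 0.0, 10.0])
P = np.diag([1.0, 1.0, 0.1, 0.5])            # state covariance (assumed)

def fuse_measurement(x_est, P, z, H, R):
    """Standard Kalman measurement update: fuse one sensor's
    object-relative observation z (model z = H x + noise, covariance R)."""
    y = z - H @ x_est                         # innovation
    S = H @ P @ H.T + R                       # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)            # Kalman gain
    x_new = x_est + K @ y
    P_new = (np.eye(len(x_est)) - K @ H) @ P
    return x_new, P_new

# Example: two hypothetical sensors each observe the vehicle's position
# relative to a known object; fusing both updates tightens the covariance.
H_pos = np.array([[1.0, 0.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0, 0.0]])
x_est, P = fuse_measurement(x_est, P, np.array([0.3, -0.1]), H_pos, np.diag([0.5, 0.5]))
x_est, P = fuse_measurement(x_est, P, np.array([0.2,  0.0]), H_pos, np.diag([0.2, 0.2]))
```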

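The LQG/H2 controller itself is not given in the abstract. The sketch below shows only the LQ state-feedback part for a simple, assumed lateral-error model (lateral offset and heading error of a kinematic bicycle model), solved with a continuous-time Riccati equation; the speed, wheelbase, and weighting matrices are placeholders, not the paper's values.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

v = 10.0                                    # assumed longitudinal speed [m/s]

# Simplified lateral-error model (an assumption, not the paper's model):
# state = [lateral offset e, heading error psi], input = steering angle.
A = np.array([[0.0, v],
              [0.0, 0.0]])
B = np.array([[0.0],
              [v / 2.7]])                   # 2.7 m wheelbase, placeholder

Q = np.diag([1.0, 0.5])                     # state weights (tuning assumption)
R = np.array([[0.1]])                       # input weight (tuning assumption)

# Solve the continuous-time algebraic Riccati equation and form the LQ gain.
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)             # K = R^{-1} B^T P

def steering_command(e, psi):
    """State-feedback steering angle from lateral offset e and heading error psi."""
    return float(-K @ np.array([e, psi]))

print(steering_command(0.5, 0.02))
```

In practice the LQ gain would be combined with a state estimator (as in the fusion sketch above) to obtain the LQG/H2 structure the abstract refers to; the input-output linearizing alternative mentioned in the abstract is not sketched here.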