3D vehicle sensor based on monocular vision

Determining the position of other vehicles on the road is key information that helps driver assistance systems increase driver safety. Accordingly, the work presented in this paper addresses the problem of detecting the vehicles in front of the ego-vehicle and estimating their 3D position using a single monochrome camera. Rather than relying on predefined high-level image features such as symmetry or shadow search, our vehicle detection approach is based on a learning process that determines, from a training set, which features best distinguish vehicles from non-vehicles. A key requirement for computing 3D information with a single camera is knowing where the horizon projects onto the image; however, this position can change in every frame and is difficult to determine. In this paper we study the coupling between the perceived horizon and the actual width of vehicles in order to reduce the uncertainty in their estimated 3D position caused by an unknown horizon.
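
As a minimal illustration of this coupling (not the paper's implementation), the following Python sketch assumes a pinhole camera with known focal length and height above a flat road, together with an assumed real vehicle width; all numeric values are hypothetical. It shows how depth can be estimated either from the gap between a vehicle's contact row and the horizon row, or from the vehicle's apparent width alone, and how the width-based estimate can be inverted to recover the horizon row at which both estimates agree.

```python
# Hypothetical parameters (not from the paper): focal length in pixels,
# camera height above the road plane in metres, assumed real vehicle width.
FOCAL_PX = 800.0
CAMERA_HEIGHT_M = 1.2
ASSUMED_VEHICLE_WIDTH_M = 1.8


def distance_from_horizon(y_bottom_px: float, y_horizon_px: float) -> float:
    """Flat-road estimate: depth from the vertical gap between the vehicle's
    road-contact row and the horizon row (image y grows downward)."""
    return FOCAL_PX * CAMERA_HEIGHT_M / (y_bottom_px - y_horizon_px)


def distance_from_width(width_px: float) -> float:
    """Alternative estimate: depth from the vehicle's apparent width and an
    assumed real width, independent of the horizon position."""
    return FOCAL_PX * ASSUMED_VEHICLE_WIDTH_M / width_px


def horizon_from_width(y_bottom_px: float, width_px: float) -> float:
    """Invert the coupling: recover the horizon row that makes the
    horizon-based and width-based depth estimates coincide."""
    depth = distance_from_width(width_px)
    return y_bottom_px - FOCAL_PX * CAMERA_HEIGHT_M / depth


if __name__ == "__main__":
    # A detected vehicle whose bounding box bottom is at row 420 and is 60 px wide.
    y_bottom, w = 420.0, 60.0
    print(f"depth from width          : {distance_from_width(w):.1f} m")
    print(f"implied horizon row       : {horizon_from_width(y_bottom, w):.1f} px")
    print(f"depth from horizon at 380 : {distance_from_horizon(y_bottom, 380.0):.1f} m")
```

In this toy setting, the width-based constraint can be used to stabilise the per-frame horizon estimate before horizon-based depth is computed for the detected vehicles, which is the kind of coupling the paper exploits to reduce the uncertainty caused by an unknown horizon.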
