Multisensor data fusion for advanced driver assistance systems - the Active Safety Car project

Driver assistance systems support overstrained and distracted drivers and are becoming increasingly essential in series-production vehicles. Object detection and segmentation are among the most challenging research topics in this field. In order to warn the driver or to brake automatically before a potential collision, objects intersecting the path of the host vehicle have to be detected and classified. Most recently developed approaches are based on two-dimensional image processing, sometimes combined with a tracking algorithm that associates detections in consecutive frames with one and the same object. Further robustness is achieved by multisensor data fusion, i.e. information from two or more different sensors (e.g. camera and radar data) is fused in order to obtain a considerably more reliable result. Another aspect of safety applications is communication between cars, which provides additional sensor locations and therefore also requires data fusion technology. Two different approaches to data fusion are proposed, and first results are presented.
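To illustrate the fusion idea described in the abstract, the sketch below shows how camera and radar position measurements of one tracked object could be combined with a linear Kalman filter. This is a minimal illustrative sketch, not the project's actual method: the constant-velocity motion model, the noise covariances, and all numeric values are assumptions chosen only to show the mechanism.

```python
# Minimal sketch (NOT the Active Safety Car method) of camera/radar fusion
# for one tracked object using a linear Kalman filter. All models and
# numbers below are illustrative assumptions.
import numpy as np

dt = 0.1  # frame interval in seconds (assumed)

# State: [x, y, vx, vy] with a constant-velocity motion model (assumed).
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)  # both sensors observe (x, y)
Q = 0.01 * np.eye(4)                       # process noise (assumed)
R_CAMERA = np.diag([2.0, 0.1])  # camera: poor range (x), good lateral (y)
R_RADAR = np.diag([0.1, 1.0])   # radar: good range (x), poor lateral (y)

def predict(x, P):
    """Propagate the track state and covariance one time step."""
    return F @ x, F @ P @ F.T + Q

def update(x, P, z, R):
    """Fuse one sensor measurement z with noise covariance R."""
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x + K @ (z - H @ x)             # corrected state
    P = (np.eye(4) - K @ H) @ P         # corrected covariance
    return x, P

# Example: one predict step, then sequential camera and radar updates.
x = np.array([10.0, 0.0, -1.0, 0.0])    # initial track state (assumed)
P = np.eye(4)
x, P = predict(x, P)
x, P = update(x, P, np.array([9.9, 0.1]), R_CAMERA)
x, P = update(x, P, np.array([9.85, 0.05]), R_RADAR)
print(x)  # fused position/velocity estimate
```

Processing the two sensors as sequential updates lets each measurement be weighted by its own noise covariance, so the fused estimate inherits the radar's range accuracy and the camera's lateral accuracy; the same update step would also accommodate measurements received via car-to-car communication.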
