A large share of serious and fatal accidents on two-lane rural roads occurs during overtaking maneuvers, and inaccurate assessment of the traffic situation is frequently identified as the main cause. A driver assistance system for these scenarios therefore promises a high safety benefit. This paper presents the sensor and data-fusion concept of a system providing such an assistance function. The level of environment information required for overtaking assistance depends on the phase of the maneuver. In early stages, i.e. just before the initial lane change, it suffices to detect oncoming vehicles at long range. In late stages, i.e. when the overtaking speed is too low to reach the gap in front of the vehicle being overtaken, dangerous situations can arise; an evasion path must then be computed from the perceived unoccupied space in front of the overtaking car. To cover all phases of the overtaking scenario, a fusion of different automotive sensors is proposed: a radar device detects independently moving objects in front of the car by exploiting the Doppler shift, and a CMOS camera supplies a video stream on which two algorithms run, a texture-based free-space detector and an object detection algorithm; both are detailed in later sections. The proposed approach fuses raw radar object data with the output of the video-based object detector. This mid-level fusion yields a list of moving objects across the entire targeted field of view. For free space, an occupancy grid representation of the environment in front of the car is employed at the shorter distances relevant for evasion maneuvers. The grid is filled by the camera free-space detection and corrected with the known objects from the object list, yielding a high-level grid fusion. The fusion of both sensor inputs is shown to be beneficial: the radar detects oncoming vehicles at comparatively long range, where object detection in video frames becomes increasingly difficult, while at close range both sensors profit from multiple cues, so that false-positive detections can be filtered out and video detections improve the estimate of other vehicles' widths. Experimental results on real-world data recorded with a typical onboard system are given in the results section.
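For reference, the radial relative velocity of a moving object follows from the measured Doppler shift via the standard monostatic radar relation below; the symbols (carrier frequency f_c, Doppler shift f_d, speed of light c) are generic assumptions, since the abstract does not specify the device's internal processing.

```latex
% Radial relative velocity v_r of a target from the measured Doppler
% shift f_d, for a monostatic radar with carrier frequency f_c
% (c: speed of light). Standard relation, assumed here; not taken
% from the paper's sensor specification.
v_r = \frac{c \, f_d}{2 f_c}
```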
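The high-level grid fusion can be sketched as follows. This is a minimal illustration assuming an Elfes-style log-odds occupancy grid; all names and parameters (FusionGrid, l_free, l_occ, cell_m) are hypothetical and not taken from the paper's implementation.

```python
# Minimal sketch of the described grid fusion: the camera free-space
# detector fills the grid, and fused radar/video objects correct it.
# All identifiers and update weights are illustrative assumptions.
import numpy as np

class FusionGrid:
    def __init__(self, size=(200, 100), cell_m=0.25):
        # Occupancy stored as log-odds; 0.0 means "unknown" (p = 0.5).
        self.log_odds = np.zeros(size)
        self.cell_m = cell_m  # metric edge length of one grid cell

    def update_free_space(self, free_mask, l_free=-0.4):
        """Lower occupancy where the texture-based free-space
        detector reports drivable ground (free_mask: boolean grid)."""
        self.log_odds[free_mask] += l_free

    def correct_with_objects(self, objects, l_occ=2.0):
        """Force cells covered by objects from the fused object list
        to 'occupied', overriding the free-space cue."""
        for (x, y, w, h) in objects:  # object footprint in grid cells
            self.log_odds[y:y + h, x:x + w] = l_occ

    def probabilities(self):
        # Convert log-odds back to occupancy probabilities.
        return 1.0 - 1.0 / (1.0 + np.exp(self.log_odds))
```

The sketch mirrors the described priority: the free-space cue only lowers occupancy softly, while known objects override it, so a texture patch misclassified as free beneath a detected vehicle still ends up marked occupied.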