A Framework for Multiple Radar and Multiple 2D/3D Camera Fusion

In this paper we present a framework for the fusion of radar and image information. In the setting considered here, we combine information from multiple close-range radars into one fused radar measurement using the overlap regions of the individual radars. This step is performed automatically with a feature-based matching technique. In addition, we use multiple 2D/3D cameras that provide (color) image and distance information. We show how to fuse these heterogeneous sensors in the context of airport runway surveillance. A possible application is the automatic detection of small objects (e.g. screws) on airfields. We outline how to build an adaptive background model of the runway situation from the fused sensor information; unwanted objects on the airfield can then be detected by change detection.
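The change-detection step described above can be illustrated with a minimal sketch. The paper does not specify the background model used, so the example below assumes a simple exponential running-average background over the fused sensor image and flags pixels that deviate from it by more than a threshold; function names and parameters are hypothetical.

```python
import numpy as np

def update_background(background, frame, alpha=0.05):
    # Exponential running average: the background slowly adapts to the scene.
    return (1.0 - alpha) * background + alpha * frame

def detect_changes(background, frame, threshold=0.2):
    # Flag pixels whose deviation from the background exceeds the threshold.
    return np.abs(frame - background) > threshold

# Simulated fused sensor image: an empty, flat runway surface.
background = np.zeros((8, 8))
frame = background.copy()
frame[4, 4] = 1.0  # a small object (e.g. a screw) appears

mask = detect_changes(background, frame)      # binary change mask
background = update_background(background, frame)
```

In practice the threshold and adaptation rate would be tuned per sensor modality, and a statistical model such as a mixture of Gaussians per pixel is a common, more robust alternative to a single running average.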
