Distributed fusion using video sensors on multiple unmanned aerial vehicles

Surveillance and ground target tracking using multiple electro-optical and infrared video sensors onboard unmanned aerial vehicles (UAVs) have attracted a great deal of interest in recent years due to the availability of inexpensive video sensors and sensor platforms. In this paper, we compare the convex combination fusion algorithm with the centralized fusion algorithm using a single target and two UAVs. The local tracker for each UAV processes pixel location measurements in the digital image corresponding to the target location on the ground. The video measurement model is based on the perspective transformation and is therefore a nonlinear function of the target position; it also includes the radial and tangential lens distortions. Each local tracker and the central tracker use an extended Kalman filter with the nearly constant velocity dynamic model. We present numerical results using simulated data from two UAVs with varying levels of process noise power spectral density and pixel location standard deviation. Our results show that both fusion algorithms are unbiased and that the mean square error (MSE) of the convex combination fusion algorithm is close to the MSE of the centralized fusion algorithm. The covariance calculated by the centralized fusion algorithm is close to the MSE and is consistent for most measurement times. However, the covariance calculated by the convex combination fusion algorithm is lower than the MSE because the common process noise is neglected, and it is therefore not consistent with the estimation errors.
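The video measurement model described above (perspective transformation plus radial and tangential lens distortion) can be sketched as follows. This is a minimal illustration using the conventional pinhole model with Brown-Conrady distortion coefficients (k1, k2, p1, p2); the function names, the camera-frame input, and the specific parameterization are illustrative assumptions, not the paper's exact formulation.

```python
def distort(x, y, k1=0.0, k2=0.0, p1=0.0, p2=0.0):
    """Apply radial (k1, k2) and tangential (p1, p2) lens distortion
    to normalized image coordinates (x, y)."""
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 * r2
    xd = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    yd = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return xd, yd

def project(pt_cam, fx, fy, cx, cy, k1=0.0, k2=0.0, p1=0.0, p2=0.0):
    """Map a 3-D point in the camera frame to a pixel location.

    The perspective transformation (division by depth Z) makes the
    measurement a nonlinear function of target position, which is why
    an extended Kalman filter is needed in the trackers."""
    X, Y, Z = pt_cam
    x, y = X / Z, Y / Z                 # perspective transformation
    xd, yd = distort(x, y, k1, k2, p1, p2)
    return fx * xd + cx, fy * yd + cy   # pixel coordinates (u, v)
```

In a tracker, this function (composed with the ground-to-camera coordinate transformation) plays the role of the EKF measurement function, and its Jacobian is evaluated at the predicted target position.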
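The nearly constant velocity dynamic model used by the local and central trackers has a standard discretized form, sketched below for a single coordinate axis. The helper name is illustrative; q corresponds to the process noise power spectral density varied in the simulations.

```python
def ncv_matrices(T, q):
    """Discretized nearly constant velocity model for one axis.

    State is [position, velocity], T is the sampling interval, and
    q is the process noise power spectral density. Returns the state
    transition matrix F and process noise covariance Q."""
    F = [[1.0, T],
         [0.0, 1.0]]
    Q = [[q * T**3 / 3.0, q * T**2 / 2.0],
         [q * T**2 / 2.0, q * T]]
    return F, Q
```

In the EKF, F propagates the predicted state and covariance between measurement times, while Q injects the process noise whose level is swept in the numerical experiments.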
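The convex combination fusion rule compared in the paper can be sketched in information form: each local estimate is weighted by its inverse covariance, and the cross-covariance between the local tracks (induced by the common process noise) is ignored, which is exactly why the fused covariance understates the MSE. A minimal 2-state sketch, with illustrative helper names:

```python
def inv2(m):
    """Invert a 2x2 matrix given as nested lists."""
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def matvec(m, v):
    return [m[0][0] * v[0] + m[0][1] * v[1],
            m[1][0] * v[0] + m[1][1] * v[1]]

def madd(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

def convex_combination_fuse(x1, P1, x2, P2):
    """Fuse two local track estimates (x1, P1) and (x2, P2).

    Neglects the cross-covariance between the tracks, so the returned
    Pf is optimistic when the tracks share common process noise."""
    I1, I2 = inv2(P1), inv2(P2)
    Pf = inv2(madd(I1, I2))                  # fused covariance
    y = [a + b for a, b in zip(matvec(I1, x1), matvec(I2, x2))]
    xf = matvec(Pf, y)                       # information-weighted mean
    return xf, Pf
```

The centralized algorithm instead feeds both UAVs' pixel measurements into a single EKF, so it accounts for the common process noise implicitly; the paper's finding is that the fused state estimates are nonetheless close in MSE, while only the centralized covariance is consistent.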
