Fusion of intensity, texture, and color in video tracking based on mutual information

Next-generation reconnaissance systems (NGRS) offer dynamic tasking of a menu of sensor modalities such as video, multi-/hyperspectral, and polarization data. A key issue is how best to exploit these modes in time-critical scenarios such as target tracking and event detection. It is essential to represent diverse sensor content in a unified measurement space so that each modality can be evaluated in terms of its contribution to the exploitation task. In this paper, mutual information is used to represent the content of individual sensor channels. A series of video tracking experiments demonstrates the effectiveness of mutual information as a fusion framework and quantifies the relative information content of intensity, color, and polarization image channels.
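To make the unified measurement space concrete, the sketch below shows a standard histogram plug-in estimator of mutual information between two image channels. The abstract does not specify the paper's estimator, so this is an illustrative assumption; the function name `mutual_information` and the 32-bin discretization are hypothetical choices, not details from the paper.

```python
import numpy as np

def mutual_information(x, y, bins=32):
    """Estimate I(X;Y) between two image channels from their joint
    intensity histogram (a plug-in estimator; bin count is a free choice)."""
    # Joint histogram of the two flattened channels, normalized to p(x, y)
    joint, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal p(x), shape (bins, 1)
    py = pxy.sum(axis=0, keepdims=True)   # marginal p(y), shape (1, bins)
    nz = pxy > 0                          # skip empty cells to avoid log(0)
    # I(X;Y) = sum_{x,y} p(x,y) log2( p(x,y) / (p(x) p(y)) ), in bits
    return np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz]))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    a = rng.integers(0, 256, size=(64, 64))          # reference channel
    b = a + rng.integers(-8, 9, size=(64, 64))       # strongly correlated channel
    c = rng.integers(0, 256, size=(64, 64))          # independent channel
    print(mutual_information(a, b))   # high: b shares most of a's content
    print(mutual_information(a, c))   # near zero: c is uninformative about a
```

In a tracking setting, one plausible use of such an estimator is to score each channel (intensity, color, polarization) by the mutual information between a target template and a candidate region, so that channels can be compared, and fused, on the same information-theoretic scale.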
