Multisensor fusion of images for target identification

Multisensor data fusion is an emerging technology with both defense and civilian applications. In this paper, an image fusion algorithm based on texture parameters is proposed for identifying long-range targets. The method uses a semi-supervised approach to detect a single target in the input images. The procedure consists of three steps: feature extraction, feature-level fusion, and sensor-level fusion. Two methods of texture feature extraction are considered, based on the co-occurrence matrix and the run-length matrix. Texture parameters are calculated at each pixel of a selected training image, in which target and non-target pixels are identified manually. Several of the texture features computed at target positions differ from those of the background. Discriminant analysis is used to perform feature-level fusion on the training image, classifying pixels as target or non-target. Applying the discriminant function to the texture-feature space produces a new image whose maxima correspond to target locations. The same discriminant function can then be applied to other images to detect the trained target regions. Sensor-level fusion combines the images obtained from feature-level fusion of the visual and IR images. The method was tested first on synthetically generated images and then on real images, and results are reported for both the co-occurrence and run-length methods of texture feature extraction for target identification.
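
For concreteness, the sketch below illustrates one way the co-occurrence-based variant of this pipeline could be implemented; it is a minimal Python illustration, not the authors' code. The 15x15 window size, the particular GLCM properties, scikit-learn's LinearDiscriminantAnalysis standing in for the discriminant analysis step, and the averaging rule used for sensor-level fusion are all assumptions introduced for the example.

```python
# Illustrative sketch of the described pipeline (assumptions noted above).
# Expects 8-bit grayscale images as 2-D numpy arrays of dtype uint8.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

GLCM_PROPS = ("contrast", "homogeneity", "energy", "correlation")
WIN = 7  # half-width of the window centred on each pixel (15x15 window)


def texture_features(image, rows, cols):
    """Co-occurrence texture features for the given pixel positions."""
    padded = np.pad(image, WIN, mode="reflect")
    feats = []
    for r, c in zip(rows, cols):
        win = padded[r:r + 2 * WIN + 1, c:c + 2 * WIN + 1]
        glcm = graycomatrix(win, distances=[1],
                            angles=[0, np.pi / 2],
                            levels=256, symmetric=True, normed=True)
        feats.append([graycoprops(glcm, p).mean() for p in GLCM_PROPS])
    return np.asarray(feats)


def train_discriminant(train_image, target_mask):
    """Fit a linear discriminant on manually labelled target/non-target pixels."""
    rows, cols = np.indices(train_image.shape)
    rows, cols = rows.ravel(), cols.ravel()
    X = texture_features(train_image, rows, cols)
    y = target_mask.ravel().astype(int)  # 1 = target pixel, 0 = background
    return LinearDiscriminantAnalysis().fit(X, y)


def discriminant_image(lda, image):
    """Apply the trained discriminant to every pixel; maxima indicate targets."""
    rows, cols = np.indices(image.shape)
    rows, cols = rows.ravel(), cols.ravel()
    X = texture_features(image, rows, cols)
    return lda.decision_function(X).reshape(image.shape)


def sensor_level_fusion(vis_score, ir_score):
    """Sensor-level fusion of the two discriminant images; the simple
    averaging rule here is an assumption, not taken from the paper."""
    return 0.5 * (vis_score + ir_score)
```

In this sketch, the training step corresponds to feature-level fusion on the manually labelled training image, the discriminant image plays the role of the fused image whose maxima mark target points, and the final function combines the visual and IR results; a run-length-matrix variant would only replace the feature computation inside texture_features.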