A New Automated Quality Assessment Algorithm for Night Vision Image Fusion

In this paper we propose a perceptual quality evaluation method for image fusion based on human visual system (HVS) models. The method assesses the quality of a fused image in three steps. First, the source and fused images are filtered by a contrast sensitivity function (CSF), and a local contrast map is computed for each filtered image. Second, a contrast preservation map is generated to describe how well each source image's contrast is retained in the fused image. Finally, the preservation maps are weighted by a saliency map to obtain an overall quality map, whose mean gives the quality score for the fused image. Experimental results compare the predictions of our algorithm, under several parameter settings, with human perceptual evaluations. For some parameter settings, our algorithm's predictions match human perceptual evaluations more closely than those of existing algorithms.
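The pipeline above can be sketched in code. This is a minimal illustration, not the paper's exact method: the Mannos-Sakrison-style CSF curve, the box-window standard deviation used as the local contrast map, the min/max ratio used as the preservation map, and the choice of source contrast as the saliency weight are all assumptions filled in for the sketch.

```python
import numpy as np

def csf_filter(img, cpd_scale=0.5):
    """Band-pass the image with a Mannos-Sakrison-style CSF in the
    frequency domain (the paper's exact CSF is an assumption here)."""
    F = np.fft.fft2(img)
    fy = np.fft.fftfreq(img.shape[0])[:, None]
    fx = np.fft.fftfreq(img.shape[1])[None, :]
    f = np.sqrt(fx ** 2 + fy ** 2) / cpd_scale  # nominal cycles/degree
    csf = 2.6 * (0.0192 + 0.114 * f) * np.exp(-(0.114 * f) ** 1.1)
    return np.real(np.fft.ifft2(F * csf))

def _local_mean(img, k=7):
    """Circular k-by-k box filter via the FFT (keeps the sketch NumPy-only)."""
    kernel = np.zeros_like(img)
    kernel[:k, :k] = 1.0 / (k * k)
    kernel = np.roll(kernel, (-(k // 2), -(k // 2)), axis=(0, 1))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(kernel)))

def local_contrast(img, k=7):
    """Local contrast map: windowed standard deviation (one simple choice)."""
    mu = _local_mean(img, k)
    var = _local_mean(img ** 2, k) - mu ** 2
    return np.sqrt(np.maximum(var, 0.0))

def fusion_quality(s1, s2, fused, eps=1e-6):
    """Mean of the saliency-weighted contrast preservation map."""
    c1 = local_contrast(csf_filter(s1))
    c2 = local_contrast(csf_filter(s2))
    cf = local_contrast(csf_filter(fused))
    # Preservation map: how much of each source's contrast survives fusion.
    p1 = np.minimum(cf, c1) / (np.maximum(cf, c1) + eps)
    p2 = np.minimum(cf, c2) / (np.maximum(cf, c2) + eps)
    # Saliency weights (assumed): regions of higher source contrast matter more.
    w1, w2 = c1 + eps, c2 + eps
    quality_map = (w1 * p1 + w2 * p2) / (w1 + w2)
    return float(quality_map.mean())
```

By construction the score lies in [0, 1]: a fused image identical to both sources preserves all contrast and scores near 1, while a flat fused image preserves none and scores near 0.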
