Infrared and visible image fusion method based on saliency detection in sparse domain

Abstract Infrared and visible image fusion is a key problem in the field of multi-sensor image fusion. To better preserve the significant information of the infrared and visible images in the final fused image, the saliency maps of the source images are introduced into the fusion procedure. First, under the framework of the joint sparse representation (JSR) model, the global and local saliency maps of the source images are obtained from the sparse coefficients. Then, a saliency detection model is proposed that combines the global and local saliency maps into an integrated saliency map. Finally, a weighted fusion algorithm based on the integrated saliency map is developed to carry out the fusion process. Experimental results show that our method is superior to state-of-the-art methods in terms of several universal quality evaluation indexes as well as visual quality.
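The final step described in the abstract, weighting each pixel of the source images by an integrated saliency map, can be sketched as follows. This is a minimal illustration, not the paper's exact algorithm: the function names and the simple min-max normalization are assumptions, and the saliency map here is taken as given rather than derived from JSR sparse coefficients.

```python
import numpy as np

def saliency_weighted_fusion(ir, vis, saliency):
    """Fuse an infrared and a visible image with a per-pixel weight map.

    ir, vis, saliency: 2-D float arrays of identical shape.
    The saliency map is normalized to [0, 1]; salient regions draw
    more from the infrared image, the rest from the visible image.
    """
    lo, hi = saliency.min(), saliency.max()
    w = (saliency - lo) / (hi - lo + 1e-12)  # weight map in [0, 1]
    return w * ir + (1.0 - w) * vis

# Toy 2x2 example: the left column is fully salient (weight 1),
# so it comes from the infrared image; the right column from the visible.
ir = np.array([[0.9, 0.1], [0.8, 0.2]])
vis = np.array([[0.2, 0.7], [0.3, 0.6]])
sal = np.array([[1.0, 0.0], [1.0, 0.0]])
fused = saliency_weighted_fusion(ir, vis, sal)
# fused == [[0.9, 0.7], [0.8, 0.6]]
```

In the paper the weight map is the integrated saliency map built from the global and local JSR-based saliency maps; any saliency detector producing a per-pixel map could be substituted into this scheme.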
