Infrared and visible image fusion method based on saliency detection and target-enhancement

To address the blurred backgrounds and fuzzy targets produced by existing infrared and visible image fusion algorithms, this paper proposes a new image fusion method based on target enhancement. First, average filtering is used to obtain a rough estimate of the transmission map, which is then refined using the image's statistical information, and a target-enhanced infrared image is obtained via the atmospheric scattering model. Next, the edges of the target-enhanced infrared image and the visible image are detected and separated using an improved edge detection method. A fusion rule based on binary information is applied to the edge regions, while a fusion rule based on ratio-weighting analysis is applied to the non-edge regions. Experimental results show that the target-enhancement-based fusion algorithm not only highlights the target information of the infrared image but also preserves as much of the visible image's detail as possible. In addition, the fused images exhibit better visual quality and higher scores on objective evaluation metrics.
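The target-enhancement step can be illustrated with a minimal sketch. The snippet below assumes a single-channel infrared image normalized to [0, 1], estimates a coarse transmission map with an average filter, applies a simple statistics-based refinement, and inverts the standard atmospheric scattering model J = (I - A) / t + A. The refinement rule and the parameters `patch` and `omega` are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def enhance_infrared(ir, patch=15, omega=0.9):
    """Hypothetical sketch of target enhancement for an infrared image.

    ir: 2-D float array in [0, 1].
    Returns a target-enhanced image via the atmospheric scattering model.
    """
    # Estimate the atmospheric light A from the brightest 0.1% of pixels.
    k = max(1, int(0.001 * ir.size))
    A = float(np.mean(np.sort(ir.ravel())[-k:]))

    # Coarse transmission estimate via average (mean) filtering.
    t_coarse = 1.0 - omega * uniform_filter(ir / max(A, 1e-6), size=patch)

    # Refine the transmission with global image statistics
    # (assumed rule: pull the map toward the image mean to limit over-enhancement).
    t = np.clip(t_coarse + 0.5 * (ir.mean() - t_coarse.mean()), 0.1, 1.0)

    # Invert the scattering model J = (I - A) / t + A to enhance the target.
    return np.clip((ir - A) / t + A, 0.0, 1.0)
```

A usage example would pass the enhanced infrared image, together with the visible image, to the subsequent edge-based fusion stage described above.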
