DenseFuse: A Fusion Approach to Infrared and Visible Images

In this paper, we present a novel deep learning architecture for the infrared and visible image fusion problem. In contrast to conventional convolutional networks, our encoding network combines convolutional layers, a fusion layer, and a dense block in which the output of each layer is connected to every other layer. This architecture is intended to extract more useful features from the source images during encoding, and two fusion layers (fusion strategies) are designed to fuse these features. Finally, the fused image is reconstructed by a decoder. Compared with existing fusion methods, the proposed method achieves state-of-the-art performance in both objective and subjective assessment.
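The two fusion strategies mentioned above can be sketched in a few lines of NumPy. This is a simplified illustration, not the paper's implementation: the addition strategy sums the two encoders' feature maps element-wise, while the l1-norm strategy weights each source by the per-pixel l1-norm of its feature vector across channels (the paper additionally smooths these activity maps with block-based averaging, which is omitted here for brevity):

```python
import numpy as np

def fuse_addition(f1, f2):
    """Addition strategy: element-wise sum of the two encoders'
    feature maps, each of shape (C, H, W)."""
    return f1 + f2

def fuse_l1(f1, f2, eps=1e-8):
    """l1-norm strategy (sketch): softmax-style weighting of the two
    feature maps by their per-pixel channel-wise l1-norm activity."""
    # Activity maps of shape (H, W): l1-norm over the channel axis.
    c1 = np.abs(f1).sum(axis=0)
    c2 = np.abs(f2).sum(axis=0)
    # Normalized weights; eps guards against division by zero.
    w1 = c1 / (c1 + c2 + eps)
    w2 = 1.0 - w1
    # Broadcast the (H, W) weights over the channel dimension.
    return w1[None] * f1 + w2[None] * f2
```

The decoder then reconstructs the fused image from the fused feature map; which strategy performs better depends on the source images, which is why both are evaluated in the paper.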
