Infrared and Visible Image Fusion via Multi-discriminators Wasserstein Generative Adversarial Network

Generative adversarial networks (GANs) have been widely applied to infrared and visible image fusion. However, existing GAN-based fusion methods establish only one discriminator, which pushes the fused image to capture gradient information from the visible image and can therefore lose infrared intensity information and texture detail in the fused result. To address this problem and improve GAN performance, we extend the framework to multiple discriminators and propose an end-to-end multi-discriminator Wasserstein generative adversarial network (MD-WGAN). In this framework, the first discriminator drives the fused image to preserve the major infrared intensity and detail information, while the second discriminator encourages it to retain more of the texture present in the visible image. We also design a texture loss function based on local binary patterns to preserve additional texture from the visible image. Extensive qualitative and quantitative experiments demonstrate the advantages of our method over other state-of-the-art fusion methods.
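
Below is a minimal sketch, in PyTorch, of how a two-critic Wasserstein objective of this kind can be wired up: one critic compares the fused image against the infrared input (intensity), the other against the visible input (texture), with a WGAN-GP gradient penalty for stable training. The network definitions, the gradient-penalty weight `lam`, the loss weights `w_adv`/`w_tex`, and the `texture_loss_fn` placeholder (standing in for the paper's LBP-based texture loss) are illustrative assumptions, not the authors' exact configuration.

```python
import torch

def gradient_penalty(critic, real, fake, device):
    """WGAN-GP penalty on samples interpolated between real and fused images."""
    alpha = torch.rand(real.size(0), 1, 1, 1, device=device)
    interp = (alpha * real + (1 - alpha) * fake).requires_grad_(True)
    score = critic(interp)
    grads = torch.autograd.grad(score.sum(), interp, create_graph=True)[0]
    return ((grads.flatten(1).norm(2, dim=1) - 1) ** 2).mean()

def critic_losses(D_ir, D_vis, ir, vis, fused, device, lam=10.0):
    """Each critic plays a separate Wasserstein game against the fused image."""
    loss_d_ir = (D_ir(fused.detach()).mean() - D_ir(ir).mean()
                 + lam * gradient_penalty(D_ir, ir, fused.detach(), device))
    loss_d_vis = (D_vis(fused.detach()).mean() - D_vis(vis).mean()
                  + lam * gradient_penalty(D_vis, vis, fused.detach(), device))
    return loss_d_ir, loss_d_vis

def generator_loss(D_ir, D_vis, vis, fused, texture_loss_fn,
                   w_adv=1.0, w_tex=100.0):
    """Adversarial terms from both critics plus a texture term on the visible image."""
    adv = -(D_ir(fused).mean() + D_vis(fused).mean())
    tex = texture_loss_fn(fused, vis)  # placeholder for an LBP-based texture distance
    return w_adv * adv + w_tex * tex
```

In a standard WGAN-GP training loop, the two critics would be updated several times per generator step, and the generator would then be updated with the combined adversarial and texture terms above.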
