Medical image fusion method based on dense block and deep convolutional generative adversarial network

Medical image fusion techniques can improve the accuracy and time efficiency of clinical diagnosis by combining complementary salient features and detail information from medical images of different modalities. We propose a novel medical image fusion algorithm based on a deep convolutional generative adversarial network with dense blocks, designed to generate fused images rich in information. The network architecture integrates two modules: an image generator built from dense blocks and an encoder-decoder, and a discriminator. The encoder network extracts image features, which are combined by a fusion rule based on the L-max norm and passed to the decoder network to produce the final fused image. This design avoids the hand-crafted activity-level measurements of traditional methods, and the dense blocks propagate intermediate-layer information to prevent information loss. In addition, the loss function combines a detail loss and a structural similarity loss, which improves the extraction of target information and edge detail from the source images. Experiments on a public clinical diagnostic medical image dataset show that the proposed algorithm not only preserves detail well but also suppresses artifacts, and it outperforms the comparison methods across different types of evaluation metrics.
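The fusion rule described above (combining encoder features via an L-max norm before decoding) can be sketched roughly. The abstract does not specify the exact norm or weighting scheme, so the sketch below assumes a per-pixel maximum-activity hard selection between the two modalities' feature maps; the function name and this interpretation are illustrative, not the authors' implementation.

```python
import numpy as np

def lmax_fusion(feat_a, feat_b):
    """Fuse two encoder feature maps of shape (C, H, W) by per-pixel activity.

    Activity of each modality at a pixel is taken as the maximum absolute
    response across channels (an 'L-max'-style measure); the modality with
    the larger activity contributes its full feature vector at that pixel.
    Assumption: the paper's L-max rule is interpreted as hard selection.
    """
    act_a = np.abs(feat_a).max(axis=0)   # (H, W) activity map for modality A
    act_b = np.abs(feat_b).max(axis=0)   # (H, W) activity map for modality B
    mask = act_a >= act_b                # True where A is the more active source
    return np.where(mask[None, :, :], feat_a, feat_b)

# toy example: fuse two 4-channel 8x8 encoder feature maps
rng = np.random.default_rng(0)
fa = rng.normal(size=(4, 8, 8))
fb = rng.normal(size=(4, 8, 8))
fused = lmax_fusion(fa, fb)
assert fused.shape == (4, 8, 8)
```

A soft variant (weighting the two feature maps by normalized activity instead of hard selection) would be a natural alternative if hard selection produces blocking artifacts at modality boundaries.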
