A Semantic-based Medical Image Fusion Approach

Clinicians must comprehensively analyze patient information drawn from different sources. Medical image fusion is a promising approach to providing comprehensive information from medical images of different modalities. However, existing medical image fusion approaches ignore the semantics of the images, making the fused image difficult to understand. In this work, we propose a new evaluation index to measure the semantic loss of a fused image, and put forward a Fusion W-Net (FW-Net) for multimodal medical image fusion. The experimental results are promising: the fused images generated by our approach greatly reduce semantic information loss and have better visual quality than those of five state-of-the-art approaches. Our approach and tool have great potential for application in clinical settings.
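The exact definitions of FW-Net and the proposed semantic-loss index are not given in this excerpt. As a purely illustrative sketch, the snippet below shows the general shape of such an evaluation: fuse two modality images with a toy baseline (pixel-wise averaging, an assumption, not the paper's method) and score the result with a reconstruction-style loss against each source. The function names and the loss formulation are hypothetical placeholders.

```python
import numpy as np

def fuse_average(img_a, img_b):
    """Toy baseline fusion: pixel-wise average of two co-registered
    modality images (e.g., CT and MR). Stands in for a learned fusion
    model such as the paper's FW-Net, whose details are not given here."""
    return (img_a + img_b) / 2.0

def reconstruction_loss(fused, img_a, img_b):
    """Illustrative proxy for information loss: the mean squared error
    between the fused image and each source modality, summed. The
    paper's actual semantic-loss index is not specified in this excerpt."""
    mse_a = np.mean((fused - img_a) ** 2)
    mse_b = np.mean((fused - img_b) ** 2)
    return mse_a + mse_b

# Example on synthetic 4x4 "images" normalized to [0, 1].
ct = np.zeros((4, 4))
mr = np.ones((4, 4))
fused = fuse_average(ct, mr)
loss = reconstruction_loss(fused, ct, mr)
```

A lower score under such a metric indicates that the fused image retains more of the intensity information carried by both sources; a learned index would replace raw MSE with features capturing diagnostic semantics.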
