Noise Adaptation Generative Adversarial Network for Medical Image Analysis

Machine learning has been widely used in medical image analysis under the assumption that the training and test data share the same feature distribution. However, medical images acquired from different devices, or from the same device under different parameter settings, are often contaminated with different amounts and types of noise, which violates this assumption. As a result, a model trained on data from one device or setting often fails on data from another, and it is expensive and tedious to label data and retrain models for every device or setting. To overcome this noise adaptation issue, it is necessary to leverage models trained on data from one device or setting for new data. In this paper, we reformulate noise adaptation as an image-to-image translation task in which the noise patterns of the test data are modified to resemble those of the training data while the image content is preserved. We propose a novel Noise Adaptation Generative Adversarial Network (NAGAN), which consists of a generator and two discriminators. The generator maps data from the source domain to the target domain. Of the two discriminators, one enforces that the generated images carry the same noise patterns as the target domain, and the other enforces that the content is preserved in the generated images. We apply the proposed NAGAN to both optical coherence tomography (OCT) images and ultrasound images. The results show that the method is able to translate the noise style. In addition, we evaluate the proposed method on a segmentation task in OCT and a classification task in ultrasound, and the experimental results show that NAGAN improves the analysis outcome.
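The abstract describes a generator trained against two adversaries, one judging noise style and one judging content preservation. The following is a minimal sketch of how such a dual-discriminator objective could be wired up in PyTorch, assuming standard least-squares adversarial losses; the network sizes, the loss weighting, and the choice of feeding the content discriminator an image paired with its annotation mask are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch of a NAGAN-style dual-discriminator training step.
# Assumptions (not from the paper): LSGAN losses, a toy conv generator,
# PatchGAN-style critics, and image+mask pairs for the content critic.
import torch
import torch.nn as nn


class Generator(nn.Module):
    """Maps a source-domain image toward the target noise style (toy network)."""
    def __init__(self, channels=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, channels, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)


class Discriminator(nn.Module):
    """PatchGAN-style critic; instantiated twice (noise style / content)."""
    def __init__(self, in_channels):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 1, 4, padding=1),
        )

    def forward(self, x):
        return self.net(x)


def train_step(G, D_noise, D_content, opt_g, opt_d, x_src, x_tgt, mask_src):
    """One update: D_noise sees raw images (noise style), D_content sees
    image/mask pairs (content preservation). All design details are assumed."""
    mse = nn.MSELoss()
    fake = G(x_src)
    real_pair = torch.cat([x_src, mask_src], dim=1)
    fake_pair = torch.cat([fake, mask_src], dim=1)

    # Discriminator update: real/target samples -> 1, generated samples -> 0.
    opt_d.zero_grad()
    d_loss = sum(
        mse(out, torch.full_like(out, label))
        for out, label in [
            (D_noise(x_tgt), 1.0), (D_noise(fake.detach()), 0.0),
            (D_content(real_pair), 1.0), (D_content(fake_pair.detach()), 0.0),
        ]
    )
    d_loss.backward()
    opt_d.step()

    # Generator update: try to fool both critics at once.
    opt_g.zero_grad()
    out_n = D_noise(fake)
    out_c = D_content(torch.cat([fake, mask_src], dim=1))
    g_loss = mse(out_n, torch.ones_like(out_n)) + mse(out_c, torch.ones_like(out_c))
    g_loss.backward()
    opt_g.step()
    return g_loss.item(), d_loss.item()
```

The key design point the sketch tries to capture is the split of responsibility: the noise-style critic only ever compares generated images against unpaired target-domain images, while the content critic conditions on source-side content so that translation cannot drift away from the original anatomy.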
