Latent Space Manipulation for High-Resolution Medical Image Synthesis via the StyleGAN.

INTRODUCTION: This paper explores the potential of the StyleGAN model as a high-resolution image generator for synthetic medical images. The ability to generate sample patient images of different modalities can be helpful for training deep learning algorithms, e.g., as a data augmentation technique.

METHODS: The StyleGAN model was trained on Computed Tomography (CT) and T2-weighted Magnetic Resonance (MR) images from 100 patients with pelvic malignancies. The resulting model was investigated with regard to three features: image modality, sex, and longitudinal slice position. Furthermore, the style transfer feature of the StyleGAN was used to move images between the modalities. The root-mean-square error (RMSE) and the mean absolute error (MAE) were used to quantify errors for MR and CT, respectively.

RESULTS: We demonstrate how these features can be transformed by manipulating the latent style vectors, and attempt to quantify how the errors change as we move through the latent style space. The best results were achieved using the style transfer feature of the StyleGAN (58.7 HU MAE for MR to CT and 0.339 RMSE for CT to MR). Slices below and above an initial central slice can be predicted within 4 cm with an error below 75 HU MAE for CT and below 0.3 RMSE for MR.

DISCUSSION: The StyleGAN is a promising model for generating synthetic medical images of both MR and CT modalities, as well as for 3D volumes.
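As a rough illustration of the latent space edits described above, the Python sketch below shows how a feature direction (e.g., CT vs. MR modality) could be estimated from group means of style codes, how a style vector can be shifted along that direction, and how the quoted MAE (in HU) and RMSE metrics are computed. The function names, the `generator` callable, and the group-mean direction estimate are illustrative assumptions, not the authors' implementation.

    import numpy as np

    def feature_direction(w_group_a: np.ndarray, w_group_b: np.ndarray) -> np.ndarray:
        """Estimate a latent direction as the difference of group means,
        e.g. mean(W_CT) - mean(W_MR). Shapes: (n_samples, latent_dim)."""
        return w_group_a.mean(axis=0) - w_group_b.mean(axis=0)

    def move_along_direction(w: np.ndarray, direction: np.ndarray, alpha: float) -> np.ndarray:
        """Shift a style vector w by alpha along a unit-normalised feature direction."""
        d = direction / np.linalg.norm(direction)
        return w + alpha * d

    def mae_hu(ct_pred: np.ndarray, ct_ref: np.ndarray) -> float:
        """Mean absolute error in Hounsfield units, used here for synthetic CT."""
        return float(np.mean(np.abs(ct_pred - ct_ref)))

    def rmse(mr_pred: np.ndarray, mr_ref: np.ndarray) -> float:
        """Root-mean-square error on intensity-normalised MR images."""
        return float(np.sqrt(np.mean((mr_pred - mr_ref) ** 2)))

    # Usage sketch (generator(...) stands in for the trained StyleGAN synthesis network):
    # d_modality = feature_direction(w_ct_samples, w_mr_samples)
    # w_edited   = move_along_direction(w_mr_image, d_modality, alpha=1.0)
    # ct_synth   = generator(w_edited)
    # print(mae_hu(ct_synth, ct_reference))

In practice, the feature direction would be derived from the trained mapping network's style codes for labelled CT and MR (or male/female, or slice-position-binned) samples; the group-mean difference used here is one simple way to obtain such a direction.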
