Assessing the importance of magnetic resonance contrasts using collaborative generative adversarial networks

A unique advantage of magnetic resonance imaging (MRI) is its mechanism for generating various image contrasts depending on tissue-specific parameters, which provides useful clinical information. Unfortunately, a complete set of MR contrasts is often difficult to obtain in a real clinical environment. Recently, it has been claimed that generative models such as generative adversarial networks (GANs) can synthesize MR contrasts that were not acquired. However, the poor scalability of existing GAN-based image synthesis poses a fundamental challenge to understanding the nature of MR contrasts: which contrasts matter, and which cannot be synthesized by generative models? Here, we show that these questions can be addressed systematically by learning the joint manifold of multiple MR contrasts using collaborative generative adversarial networks. Our experimental results show that the exogenous contrast provided by contrast agents is not replaceable, whereas endogenous contrasts such as T1 and T2 can be synthesized from other contrasts. These findings provide important guidance for the design of MR acquisition protocols in clinical environments.

Magnetic resonance scans use different contrast mechanisms to generate different images, each giving specific clinical information. Lee et al. use a collaborative generative model to synthesize some magnetic resonance contrasts from others, providing guidance for how clinical imaging times can be reduced.
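The core idea described in the abstract is to learn a joint manifold across contrasts so that any single missing contrast can be imputed from the remaining ones by a collaborative GAN. The sketch below is a minimal, assumption-laden illustration of a CollaGAN-style generator, a single network conditioned on a one-hot target-contrast code, and is not the authors' implementation: the four-contrast set, channel counts, and layer choices are invented for the example, and the adversarial and cycle-consistency losses used for training are omitted.

# Minimal sketch (not the authors' implementation) of CollaGAN-style missing-contrast
# imputation: one generator receives the available MR contrasts plus a one-hot
# target-contrast code and synthesizes the missing contrast. All sizes and the
# contrast set below are illustrative assumptions, not values from the paper.
import torch
import torch.nn as nn

N_CONTRASTS = 4  # e.g. T1, T2, FLAIR, contrast-enhanced T1 (assumed set)

class CollaGenerator(nn.Module):
    """Maps (available contrasts + target-contrast code) -> missing contrast."""
    def __init__(self, n_contrasts=N_CONTRASTS, base_ch=64):
        super().__init__()
        # Input: all contrast channels (the missing one zero-filled)
        # concatenated with a spatially tiled one-hot target code.
        in_ch = n_contrasts + n_contrasts
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, base_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(base_ch, base_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(base_ch, 1, 3, padding=1),
        )

    def forward(self, contrasts, target_idx):
        # contrasts: (B, N, H, W) with the missing contrast zero-filled.
        b, n, h, w = contrasts.shape
        code = torch.zeros(b, n, h, w, device=contrasts.device)
        code[:, target_idx] = 1.0  # one-hot target code, tiled over the image
        return self.net(torch.cat([contrasts, code], dim=1))

# Toy usage: impute contrast index 2 from the other three contrasts.
gen = CollaGenerator()
x = torch.randn(1, N_CONTRASTS, 128, 128)
x[:, 2] = 0.0                      # pretend contrast 2 was not acquired
fake = gen(x, target_idx=2)
print(fake.shape)                  # torch.Size([1, 1, 128, 128])

In the full method, a single such generator is trained jointly over all target contrasts with adversarial and cycle-consistency objectives, so that masking out each contrast in turn reveals how well it can be recovered from the others.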
