T1-weighted and T2-weighted MRI image synthesis with convolutional generative adversarial networks

Background: The objective of this study was to propose an optimal input image quality for a conditional generative adversarial network (GAN) applied to T1-weighted and T2-weighted magnetic resonance imaging (MRI) images. Materials and methods: A total of 2,024 images from 104 patients, scanned between 2017 and 2018, were used. Prediction frameworks for T1-weighted to T2-weighted and T2-weighted to T1-weighted MRI images were created with a GAN. Two image sizes (512 × 512 and 256 × 256) and two grayscale conversion methods (simple and adaptive) were used for the input images. In the simple conversion method, images were converted from 16-bit to 8-bit by dividing by 256 levels. In the adaptive conversion method, unused levels were eliminated from the 16-bit images, which were then converted to 8-bit by dividing by the value obtained from dividing the maximum pixel value by 256. Results: The relative mean absolute error (rMAE) was smallest with the adaptive conversion method: 0.15 for T1-weighted to T2-weighted and 0.17 for T2-weighted to T1-weighted MRI images. The adaptive conversion method also yielded the smallest relative mean square error (rMSE) and relative root mean square error (rRMSE), as well as the largest peak signal-to-noise ratio (PSNR) and mutual information (MI). Computation time depended on the image size. Conclusions: Input resolution and image size affect prediction accuracy. The proposed model and prediction framework can help improve the versatility and quality of multi-contrast MRI examinations without the need for prolonged scan times.
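The two grayscale conversions described above can be sketched as follows. This is a minimal illustration, not the authors' code: it assumes NumPy arrays of 16-bit pixel data, and the function names (`simple_convert`, `adaptive_convert`) are hypothetical.

```python
import numpy as np

def simple_convert(img16):
    # Simple method: collapse the full 16-bit range into 8 bits
    # by dividing by 256 levels.
    return (img16 // 256).astype(np.uint8)

def adaptive_convert(img16):
    # Adaptive method: unused upper levels are eliminated by scaling
    # with the image's actual maximum pixel value. The divisor is
    # (max pixel value) / 256, as described in the abstract.
    scale = img16.max() / 256.0
    return np.clip(img16 / scale, 0, 255).astype(np.uint8)
```

The adaptive method stretches the used intensity range across all 256 output levels, which preserves more contrast when an acquisition does not use the full 16-bit dynamic range.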
