Changing the Contrast of Magnetic Resonance Imaging Signals using Deep Learning

The contrast settings chosen before acquiring a magnetic resonance imaging (MRI) signal depend heavily on the subsequent tasks. Because each contrast highlights different tissues, automated segmentation tools, for example, may be optimized for a particular contrast. Unfortunately, the optimal contrast for subsequent automated methods may not be known at the time of signal acquisition; performing multiple scans with different contrasts increases the total examination time, and registering the resulting sequences introduces extra work and potential errors. Building on the recent achievements of deep learning in medical applications, this work describes a novel approach for transferring any contrast to any other. The model architecture incorporates the signal equation for spin-echo sequences, so the model inherently learns the unknown quantitative maps of proton density and the T1 and T2 relaxation times. This allows the model to retrospectively reconstruct spin-echo sequences by changing the contrast settings, namely the echo time (TE) and repetition time (TR). The model learns to identify the contrast of pelvic MR images, so no paired data of the same anatomy in different contrasts is required for training; the experiments are therefore easily reproducible with other contrasts or other patient anatomies. Regardless of the contrast of the input image, the model reconstructs the signal accurately for the contrasts available for evaluation, and for the same anatomy the quantitative maps are consistent across a range of input contrasts. Realized in practice, the proposed method would greatly simplify the modern radiotherapy pipeline. The trained model is made public together with a tool for testing it on example images.
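
The core of the approach can be made concrete with the standard spin-echo signal equation, S = PD · (1 − e^(−TR/T1)) · e^(−TE/T2). The sketch below is a minimal illustration, not the paper's released code: it assumes the model outputs per-voxel proton density, T1, and T2 maps, and the function name, the model interface in the comments, and the example TR/TE values are all hypothetical.

```python
import numpy as np

def spin_echo_signal(pd, t1, t2, tr, te):
    """Standard spin-echo signal equation: S = PD * (1 - exp(-TR/T1)) * exp(-TE/T2).

    pd, t1, t2 are per-voxel quantitative maps (NumPy arrays); tr and te are
    scalar repetition and echo times in the same unit as t1 and t2 (e.g. ms).
    """
    return pd * (1.0 - np.exp(-tr / t1)) * np.exp(-te / t2)

# Hypothetical usage: given quantitative maps predicted for one anatomy,
# synthesize images with arbitrary contrast settings after acquisition.
# pd_map, t1_map, t2_map = model(input_image)   # assumed model interface
# t2w = spin_echo_signal(pd_map, t1_map, t2_map, tr=4000.0, te=90.0)  # T2-weighted-like
# t1w = spin_echo_signal(pd_map, t1_map, t2_map, tr=500.0, te=15.0)   # T1-weighted-like
```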
