Unpaired MR to CT Synthesis with Explicit Structural Constrained Adversarial Learning

In medical imaging tasks such as PET-MR attenuation correction and MRI-guided radiation therapy, synthesizing CT images from MR plays an important role in obtaining tissue density information. Recently, deep-learning-based image synthesis techniques have attracted much attention owing to their superior ability for image mapping. However, most current deep-learning-based synthesis methods require large amounts of paired data, which greatly limits their applicability. Efforts have been made to relax this restriction; the cycle-consistent adversarial network (Cycle-GAN), for example, can synthesize medical images from unpaired data. In Cycle-GAN, however, the cycle consistency loss serves only as an indirect structural similarity metric between the input and the synthesized images, and often leads to mismatched anatomical structures in the synthesized results. To overcome this shortcoming, we propose (1) to use a mutual information loss to directly enforce the structural similarity between the input MR and the synthesized CT image and (2) to incorporate shape consistency information to improve the synthesis results. Experimental results demonstrate that, compared to Cycle-GAN, the proposed method achieves better performance both qualitatively and quantitatively for whole-body MR to CT synthesis with unpaired training images.
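To make the idea of a mutual information loss concrete, the sketch below estimates mutual information between two images from their joint intensity histogram. This is a minimal, hypothetical illustration in NumPy, not the paper's actual (differentiable, network-integrated) estimator; the function name and bin count are assumptions. Maximizing MI(MR, synthesized CT), i.e., minimizing its negative as a loss, directly rewards structural correspondence, whereas the cycle consistency loss does so only indirectly through reconstruction.

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Histogram-based mutual information estimate between two images.

    Hypothetical illustration of the structural similarity term:
    a loss of the form -MI(MR, synthesized CT) encourages aligned
    anatomical structures. The paper's exact estimator may differ.
    """
    # Joint histogram of paired pixel intensities.
    hist, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = hist / hist.sum()          # joint distribution p(x, y)
    px = pxy.sum(axis=1)             # marginal p(x)
    py = pxy.sum(axis=0)             # marginal p(y)
    nz = pxy > 0                     # avoid log(0)
    # MI = sum p(x,y) * log( p(x,y) / (p(x) p(y)) )
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz])))
```

As a sanity check, an image shares much more mutual information with itself (or a structurally aligned counterpart) than with independent noise, which is exactly the property the loss exploits.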