BACKGROUND
A novel Generative Adversarial Network (GAN)-based bidirectional cross-modality unsupervised domain adaptation (GBCUDA) framework is developed for cardiac image segmentation. It effectively tackles the degradation in segmentation performance that occurs when a network is adapted to a target domain without ground-truth labels.
METHOD
GBCUDA uses a GAN for image alignment and applies adversarial learning to extract image features, gradually enhancing the domain invariance of the extracted features. A shared encoder performs an end-to-end learning task in which features that differ between the two domains complement each other. A self-attention mechanism is incorporated into the GAN so that details can be generated conditioned on cues from all feature positions. Furthermore, spectral normalization is applied to stabilize GAN training, and a knowledge distillation loss is introduced on high-level feature maps to better accomplish the cross-modality segmentation task.
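To make the attention, normalization, and distillation components concrete, the following is a minimal PyTorch sketch of a SAGAN-style self-attention block with spectral normalization, together with a KL-based distillation loss on flattened feature maps. The names (SelfAttention, distillation_loss) and the temperature tau are illustrative assumptions, not the authors' exact implementation.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F
    from torch.nn.utils import spectral_norm

    class SelfAttention(nn.Module):
        """SAGAN-style self-attention: every output position attends to
        all feature positions, so details are generated from global
        context rather than local receptive fields alone."""
        def __init__(self, channels: int):
            super().__init__()
            # 1x1 convolutions project to query/key/value; spectral norm
            # bounds their Lipschitz constant to stabilize GAN training.
            self.query = spectral_norm(nn.Conv2d(channels, channels // 8, 1))
            self.key = spectral_norm(nn.Conv2d(channels, channels // 8, 1))
            self.value = spectral_norm(nn.Conv2d(channels, channels, 1))
            self.gamma = nn.Parameter(torch.zeros(1))  # learned residual weight

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            b, c, h, w = x.shape
            q = self.query(x).flatten(2).transpose(1, 2)  # (B, HW, C//8)
            k = self.key(x).flatten(2)                    # (B, C//8, HW)
            v = self.value(x).flatten(2)                  # (B, C, HW)
            attn = F.softmax(q @ k, dim=-1)               # (B, HW, HW)
            out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
            return self.gamma * out + x                   # residual connection

    def distillation_loss(student_feat, teacher_feat, tau: float = 2.0):
        """KL divergence between temperature-softened high-level feature
        maps, a common knowledge-distillation formulation."""
        s = F.log_softmax(student_feat.flatten(1) / tau, dim=1)
        t = F.softmax(teacher_feat.flatten(1) / tau, dim=1)
        return F.kl_div(s, t, reduction="batchmean") * tau * tau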
RESULTS
The effectiveness of our proposed unsupervised domain adaptation framework is evaluated on the Multi-Modality Whole Heart Segmentation (MM-WHS) Challenge 2017 dataset. The proposed method improves the average Dice over the four cardiac substructures from 74.1% to 81.5% and reduces the average symmetric surface distance (ASD) from 7.0 to 5.8 on CT images. For MRI images, our framework trained on CT images achieves an average Dice of 59.2% and reduces the average ASD from 5.7 to 4.9.
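For reference on the reported metrics, the Dice score measures volumetric overlap between predicted and ground-truth masks, while ASD averages the symmetric surface-to-surface distances between them. Below is a minimal NumPy sketch of the Dice computation; the helper name is hypothetical.

    import numpy as np

    def dice_score(pred: np.ndarray, gt: np.ndarray) -> float:
        """Dice similarity coefficient 2|A∩B| / (|A| + |B|) between two
        binary segmentation masks, returned as a percentage."""
        pred, gt = pred.astype(bool), gt.astype(bool)
        inter = np.logical_and(pred, gt).sum()
        denom = pred.sum() + gt.sum()
        return 100.0 * 2.0 * inter / denom if denom else 100.0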
CONCLUSIONS
The evaluation results demonstrate the effectiveness of our method for domain adaptation and its superiority over current state-of-the-art domain adaptation methods.