C-MADA: unsupervised cross-modality adversarial domain adaptation framework for medical image segmentation

Deep learning models have obtained state-of-the-art results for medical image analysis. However, when these models are tested on an unseen domain, there is significant performance degradation. In this work, we present an unsupervised Cross-Modality Adversarial Domain Adaptation (C-MADA) framework for medical image segmentation. C-MADA implements image- and feature-level adaptation in a sequential manner. First, images from the source domain are translated to the target domain through unpaired image-to-image adversarial translation with a cycle-consistency loss. Then, a U-Net is trained with the mapped source domain images and the target domain images in an adversarial manner to learn domain-invariant feature representations. Furthermore, to improve the network's segmentation performance, information about the shape, texture, and contour of the predicted segmentation is included during the adversarial training. C-MADA is tested on the task of brain MRI segmentation, obtaining competitive results.
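For reference, the cycle-consistency term used in the unpaired image-to-image translation step typically follows the standard CycleGAN-style formulation sketched below; the notation (generators $G: X \rightarrow Y$ and $F: Y \rightarrow X$ mapping between source domain $X$ and target domain $Y$) is introduced here for illustration, and the exact weighting used in C-MADA may differ:

\[
\mathcal{L}_{\mathrm{cyc}}(G, F) = \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\big[\lVert F(G(x)) - x \rVert_1\big] + \mathbb{E}_{y \sim p_{\mathrm{data}}(y)}\big[\lVert G(F(y)) - y \rVert_1\big]
\]

This term encourages a translated source image to map back to the original when passed through both generators, which is what allows the translation to be learned without paired source-target images.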