FIRE: Unsupervised bi-directional inter- and intra-modality registration using deep networks

Magnetic resonance imaging (MRI) benefits from the acquisition of multiple sequences (hereafter referred to as “modalities”) within a single imaging session. Each modality offers complementary spatial and functional information in the clinical setting. Inter-modality and intra-modality (across MR sequence slices) image registration is an important pre-processing step in multiple applications of routine clinical workflows, for example when visual or quantitative imaging biomarkers must be assessed across multi-sequence or multi-slice MRI data. This paper presents an unsupervised deep learning-based registration network that learns affine and non-rigid transformations simultaneously. Inverse consistency is an important property that is commonly ignored in recent deep learning-based inter-modality registration algorithms. We address this issue through our proposed multi-task, cross-domain image synthesis architecture, which incorporates a new comprehensive transformation network. The proposed model learns a modality-independent latent representation to perform cycle-consistent cross-modality synthesis, and uses an inverse-consistency loss to learn paired transformations that align the synthesized image with the target image. We name the proposed framework “FIRE” after the shape of its architecture, and we focus on interpreting model components to enhance interpretability for clinical MR applications. Our method achieves performance comparable to or better than a well-established baseline method in experiments on multi-sequence brain MR data and intra-modality 4D cardiac Cine-MR data.
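
To make the inverse-consistency idea concrete, the following is a minimal PyTorch sketch of such a loss, not the authors' implementation: the function names, the 2D dense-displacement parameterization, the (x, y) channel ordering, and the normalized-coordinate convention are all assumptions for illustration. The loss penalizes the deviation of the composed forward/backward displacement fields from the identity mapping.

```python
import torch
import torch.nn.functional as F

def warp(image, flow):
    """Warp an image (N, C, H, W) with a dense displacement field
    flow (N, 2, H, W) given in normalized [-1, 1] coordinates,
    with channels ordered (x, y). (Assumed convention.)"""
    n, _, h, w = flow.shape
    # Build an identity sampling grid in normalized coordinates.
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, h, device=flow.device),
        torch.linspace(-1, 1, w, device=flow.device),
        indexing="ij",
    )
    grid = torch.stack((xs, ys), dim=-1).unsqueeze(0).expand(n, -1, -1, -1)
    # Add the predicted displacements to the identity grid and resample.
    grid = grid + flow.permute(0, 2, 3, 1)
    return F.grid_sample(image, grid, align_corners=True)

def inverse_consistency_loss(flow_ab, flow_ba):
    """Penalize deviation of the composed forward/backward transforms
    from identity: phi_BA(phi_AB(x)) should map x back to x."""
    # Composed displacement at x: u_ab(x) + u_ba(x + u_ab(x)).
    composed = warp(flow_ba, flow_ab) + flow_ab
    return composed.abs().mean()
```

In a paired setup, one would add `inverse_consistency_loss(flow_ab, flow_ba)` (and, symmetrically, the B-to-A composition) as a regularization term alongside the similarity and synthesis losses, so that the two learned transformations remain approximate inverses of each other.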