Attention for Image Registration (AiR): an unsupervised Transformer approach

Image registration, an important building block in many signal processing tasks, often suffers from problems of stability and efficiency. Non-learning registration approaches rely on optimizing a similarity metric between the fixed and moving images, which is usually costly in both time and space, and the problem worsens when the images are large or the deformations between them are severe. Recently, deep learning based, and more precisely convolutional neural network (CNN) based, image registration methods have been widely investigated and show promise in overcoming the weaknesses of non-learning methods. To explore advanced learning approaches to practical registration problems, this paper introduces an attention mechanism for deformable image registration. The proposed approach learns the deformation field with a Transformer framework (AiR) that does not rely on CNNs yet can still be trained efficiently on GPGPU devices. Put more vividly, we treat image registration as a language translation task and introduce a Transformer to tackle it. Our method generates the deformation map in an unsupervised manner and is evaluated on two benchmark datasets. The source code of AiR will be released on GitLab.
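
To make the translation analogy concrete, below is a minimal sketch (in PyTorch) of how a Transformer encoder–decoder could turn a fixed/moving image pair into a dense displacement field and warp the moving image with it. The class name `AiRSketch`, the patch size, the model width, and the loss weight are illustrative assumptions, not the released AiR implementation.

```python
# Minimal sketch: fixed image tokens act as the "source sentence", moving image
# tokens as the "target"; the decoder output is folded back into a dense
# displacement field used to warp the moving image. Hyper-parameters are
# illustrative assumptions, not the authors' settings.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AiRSketch(nn.Module):
    def __init__(self, img_size=28, patch=4, dim=64, heads=4, layers=2):
        super().__init__()
        self.patch = patch
        self.n = (img_size // patch) ** 2
        self.embed = nn.Linear(patch * patch, dim)             # patch -> token
        self.pos = nn.Parameter(torch.zeros(1, self.n, dim))   # learned positions
        self.transformer = nn.Transformer(
            d_model=dim, nhead=heads,
            num_encoder_layers=layers, num_decoder_layers=layers,
            batch_first=True)
        self.to_flow = nn.Linear(dim, 2 * patch * patch)       # per-pixel (dx, dy)

    def tokens(self, img):
        # (B, 1, H, W) -> (B, n_patches, patch*patch) -> embedded tokens
        p = F.unfold(img, self.patch, stride=self.patch).transpose(1, 2)
        return self.embed(p) + self.pos

    def forward(self, fixed, moving):
        b, _, h, w = moving.shape
        src = self.tokens(fixed)                      # encoder input ("source")
        out = self.transformer(src, self.tokens(moving))
        flow = self.to_flow(out).transpose(1, 2)      # (B, 2*p*p, n_patches)
        flow = F.fold(flow, (h, w), self.patch, stride=self.patch)  # (B, 2, H, W)
        # Identity sampling grid in [-1, 1] plus the predicted displacement.
        ys, xs = torch.meshgrid(
            torch.linspace(-1, 1, h), torch.linspace(-1, 1, w), indexing="ij")
        grid = torch.stack((xs, ys), dim=-1).to(flow).expand(b, h, w, 2)
        warped = F.grid_sample(
            moving, grid + flow.permute(0, 2, 3, 1), align_corners=True)
        return warped, flow


def loss_fn(warped, fixed, flow, lam=0.1):
    # Unsupervised objective: image similarity plus a smoothness penalty on flow.
    sim = F.mse_loss(warped, fixed)
    smooth = flow.diff(dim=2).abs().mean() + flow.diff(dim=3).abs().mean()
    return sim + lam * smooth
```

Training then amounts to minimizing this similarity-plus-smoothness objective over image pairs; no ground-truth deformation fields are required, which is what makes the setup unsupervised.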
