Learning 3D non-rigid deformation based on unsupervised deep learning for PET/CT image registration

This paper proposes a novel method for learning a 3D non-rigid deformation to automatically register Positron Emission Tomography (PET) and Computed Tomography (CT) scans obtained from the same patient. The proposed scheme consists of two modules: (1) a low-resolution displacement vector field (LR-DVF) estimator, a 3D deep convolutional network (ConvNet) that directly estimates the voxel-wise displacement (a 3D vector field) between the PET and CT images, and (2) a 3D spatial transformer and re-sampler, which warps the PET image to match the anatomical structures in the CT image using the estimated 3D vector field. The ConvNet parameters are learned from a set of PET/CT image pairs via unsupervised learning: the Normalized Cross Correlation (NCC) between the PET and CT images serves as the similarity metric guiding an end-to-end training process, with a regularization term that preserves the smoothness of the 3D deformation. A dataset of 170 PET/CT scans is used in experiments based on 10-fold cross-validation, with a total of 22,338 3D patches sampled from the dataset. In each fold, 3D patches from 153 patients (90%) are used to train the parameters, while the whole-body volumes of the remaining 17 patients (10%) are used to test the registration performance. The experimental results show that the registration accuracy (the mean NCC) increases from 0.402 before registration to 0.567 with the proposed scheme. We also compare our scheme with previous work (DIRNet), and the results confirm its advantage.
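
To make the training recipe concrete, below is a minimal sketch (not the authors' implementation) of the unsupervised objective described above, assuming a PyTorch setting: a ConvNet predicts a dense displacement vector field (DVF), a spatial transformer warps the PET volume toward the CT volume, and the loss combines negative NCC with a smoothness regularizer. The model interface, the use of a full-resolution DVF in place of the LR-DVF, the (dx, dy, dz) channel ordering, and the weight lambda_smooth are illustrative assumptions.

```python
# Sketch of the unsupervised PET->CT registration objective (assumptions noted above).
import torch
import torch.nn as nn
import torch.nn.functional as F

def warp(moving, dvf):
    """Warp a moving PET volume with a dense displacement field (spatial transformer).
    moving: (N, 1, D, H, W); dvf: (N, 3, D, H, W) voxel displacements, channels (dx, dy, dz)."""
    n, _, d, h, w = moving.shape
    # Identity sampling grid in normalized [-1, 1] coordinates.
    base = F.affine_grid(
        torch.eye(3, 4, device=moving.device).unsqueeze(0).repeat(n, 1, 1),
        size=moving.shape, align_corners=True)                      # (N, D, H, W, 3)
    # Convert voxel displacements to normalized (x, y, z) offsets expected by grid_sample.
    scale = torch.tensor([2.0 / max(w - 1, 1), 2.0 / max(h - 1, 1), 2.0 / max(d - 1, 1)],
                         device=moving.device)
    disp = dvf.permute(0, 2, 3, 4, 1) * scale                       # (N, D, H, W, 3)
    return F.grid_sample(moving, base + disp, align_corners=True)

def ncc_loss(a, b, eps=1e-5):
    """Negative normalized cross-correlation between two volumes (similarity metric)."""
    a = a - a.mean(dim=(2, 3, 4), keepdim=True)
    b = b - b.mean(dim=(2, 3, 4), keepdim=True)
    num = (a * b).sum(dim=(2, 3, 4))
    den = torch.sqrt((a ** 2).sum(dim=(2, 3, 4)) * (b ** 2).sum(dim=(2, 3, 4)) + eps)
    return -(num / den).mean()

def smoothness_loss(dvf):
    """L2 penalty on spatial gradients of the displacement field (regularization term)."""
    dz = dvf[:, :, 1:, :, :] - dvf[:, :, :-1, :, :]
    dy = dvf[:, :, :, 1:, :] - dvf[:, :, :, :-1, :]
    dx = dvf[:, :, :, :, 1:] - dvf[:, :, :, :, :-1]
    return (dz ** 2).mean() + (dy ** 2).mean() + (dx ** 2).mean()

def train_step(model, optimizer, pet, ct, lambda_smooth=0.01):
    """One unsupervised step: predict DVF from the PET/CT pair, warp PET toward CT,
    and minimize -NCC(warped PET, CT) + lambda * smoothness(DVF)."""
    dvf = model(torch.cat([pet, ct], dim=1))          # assumed output: (N, 3, D, H, W)
    warped_pet = warp(pet, dvf)
    loss = ncc_loss(warped_pet, ct) + lambda_smooth * smoothness_loss(dvf)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this sketch the network sees the concatenated PET/CT pair and is trained only by the image similarity and the smoothness constraint, so no ground-truth deformations are required, which matches the unsupervised setting of the paper.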

[1] Gustavo Carneiro, et al. Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support, 2017, Lecture Notes in Computer Science.

[2] Xiangrong Zhou, et al. Normal model construction for statistical image analysis of torso FDG-PET images based on anatomical standardization by CT images from FDG-PET/CT devices, 2017, International Journal of Computer Assisted Radiology and Surgery.

[3] Xiangrong Zhou, et al. Deep learning of the sectional appearances of 3D CT images for anatomical structure segmentation based on an FCN voting method, 2017, Medical Physics.

[4] Tae Hee Han, et al. Fast Normalized Cross-Correlation, 2009, Circuits, Systems, and Signal Processing.

[5] Max A. Viergever, et al. End-to-End Unsupervised Deformable Image Registration with a Convolutional Neural Network, 2017, DLMIA/ML-CDS@MICCAI.

[6] Y. Raghavender Rao, et al. Application of Normalized Cross Correlation to Image Registration, 2014.

[7] Xiangrong Zhou, et al. Quantitative Analysis of Torso FDG-PET Scans by Using Anatomical Standardization of Normal Cases from Thorough Physical Examinations, 2015, PLoS ONE.

[8] Yong Fan, et al. Non-rigid image registration using fully convolutional networks with deep self-supervision, 2017, arXiv.

[9] Cyrill Burger, et al. PET-CT image co-registration in the thorax: influence of respiration, 2002, European Journal of Nuclear Medicine and Molecular Imaging.

[10] Mert R. Sabuncu, et al. Unsupervised Learning for Fast Probabilistic Diffeomorphic Registration, 2018, MICCAI.

[11] Max A. Viergever, et al. A survey of medical image registration - under review, 2016, Medical Image Analysis.

[12] David R. Haynor, et al. PET-CT image registration in the chest using free-form deformations, 2003, IEEE Transactions on Medical Imaging.

[13] Nikos Paragios, et al. Deformable Medical Image Registration: A Survey, 2013, IEEE Transactions on Medical Imaging.

[14] Andrew Zisserman, et al. Spatial Transformer Networks, 2015, NIPS.

[15] R. Shekhar, et al. Automated 3-dimensional elastic registration of whole-body PET and CT from separate or combined scanners, 2005, Journal of Nuclear Medicine.