Conditional Segmentation in Lieu of Image Registration

Classical pairwise image registration methods search for a spatial transformation that optimises a numerical measure of how well a pair of moving and fixed images are aligned. Current learning-based registration methods have adopted the same paradigm and typically predict, for any new input image pair, dense correspondences in the form of a dense displacement field or the parameters of a spatial transformation model. However, in many applications of registration, the spatial transformation itself is only required to propagate points or regions of interest (ROIs). In such cases, detailed pixel- or voxel-level correspondences within or outside of these ROIs often have little clinical value. In this paper, we propose an alternative paradigm in which the network learns to localise, within one image, the corresponding image-specific ROIs defined in another image. This reframes image registration as a conditional segmentation task, which can build on standard image segmentation networks and their widely adopted training strategies. Using the registration of 3D MRI and ultrasound images of the prostate as an example to demonstrate this new approach, we report a median target registration error (TRE) of 2.1 mm between the ground-truth ROIs defined on intraoperative ultrasound images and those propagated from the preoperative MR images. Significantly lower (>34%) TREs were obtained using the proposed conditional segmentation than with a previously proposed spatial-transformation-predicting registration network trained on the same multiple ROI labels for individual image pairs. We conclude this work with a quantitative bias-variance analysis that offers one explanation of the observed improvement in registration accuracy.
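The input/output contract described above, predicting an image-specific ROI directly in fixed-image space rather than a displacement field, can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the channel-stacking convention, array shapes, and the toy centroid-based TRE are all assumptions introduced here for illustration.

```python
import numpy as np

def conditional_segmentation_input(fixed_img, moving_img, moving_roi):
    """Stack the fixed image, moving image, and the ROI defined on the
    moving image into one multi-channel conditional input volume.
    A segmentation network (e.g. a 3D U-Net) taking this input would
    predict the ROI directly in fixed-image space, replacing the dense
    displacement field of a conventional registration network."""
    return np.stack([fixed_img, moving_img, moving_roi], axis=0)

def centroid_tre(pred_roi, gt_roi, voxel_spacing_mm):
    """Toy target registration error: Euclidean distance (in mm)
    between the centroids of predicted and ground-truth binary ROIs."""
    spacing = np.asarray(voxel_spacing_mm, dtype=float)
    c_pred = np.argwhere(pred_roi).mean(axis=0) * spacing
    c_gt = np.argwhere(gt_roi).mean(axis=0) * spacing
    return float(np.linalg.norm(c_pred - c_gt))

# Tiny synthetic volumes (D, H, W) standing in for MR/ultrasound data.
fixed = np.zeros((8, 8, 8))
moving = np.zeros((8, 8, 8))
roi_moving = np.zeros((8, 8, 8))
roi_moving[2:4, 2:4, 2:4] = 1

x = conditional_segmentation_input(fixed, moving, roi_moving)
print(x.shape)  # (3, 8, 8, 8): one 3-channel conditional input

gt = np.zeros((8, 8, 8))
gt[3:5, 3:5, 3:5] = 1
print(round(centroid_tre(roi_moving, gt, (1.0, 1.0, 1.0)), 3))  # 1.732
```

In practice the TRE in the paper is computed between ground-truth and propagated ROIs on real intraoperative ultrasound, but this centroid distance conveys the idea: accuracy is measured on the ROIs themselves, so no voxel-level correspondence outside them is ever needed.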
