Learning Multi-Modal Volumetric Prostate Registration With Weak Inter-Subject Spatial Correspondence

Recent studies have demonstrated that convolutional neural networks (CNNs) are well suited to the image registration problem. CNNs enable faster transformation estimation and the greater generalization capability needed for better support during medical interventions. Conventional fully supervised training requires large amounts of high-quality ground truth, such as voxel-to-voxel transformations, which are typically tedious and error-prone to obtain. In our work, we use weakly supervised learning, which optimizes the model indirectly via segmentation masks, a more accessible form of ground truth than deformation fields. For this weak supervision, we investigate two segmentation similarity measures: the multiscale Dice similarity coefficient (mDSC) and the similarity between segmentation-derived signed distance maps (SDMs). We show that combining the mDSC and SDM similarity measures yields a more accurate and natural transformation pattern together with stronger gradient coverage. Furthermore, we introduce an auxiliary network input carrying prior information about the prostate location in the MR sequence, which is usually available preoperatively. This approach significantly outperforms standard two-input models. On weakly labelled MR-TRUS prostate data, we show registration quality comparable to the state-of-the-art deep learning-based method.
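To make the weak-supervision objective concrete, the sketch below shows one way to combine a multiscale Dice term with an SDM similarity term into a single registration loss. This is a minimal illustration assuming PyTorch; the Gaussian scales, the weighting factor `alpha`, and all function names are illustrative choices, not the paper's actual implementation.

```python
# Minimal sketch of a combined mDSC + SDM weak-supervision loss.
# Assumptions: PyTorch tensors of shape [B, 1, D, H, W]; the sigmas,
# alpha, and function names are illustrative, not taken from the paper.
import torch
import torch.nn.functional as F


def soft_dice(a, b, eps=1e-6):
    """Soft Dice coefficient between two probability maps, per batch element."""
    inter = (a * b).sum(dim=(2, 3, 4))
    union = a.sum(dim=(2, 3, 4)) + b.sum(dim=(2, 3, 4))
    return (2.0 * inter + eps) / (union + eps)


def gaussian_blur3d(x, sigma):
    """Separable 3-D Gaussian smoothing used to build coarser label scales."""
    radius = int(3 * sigma)
    coords = torch.arange(-radius, radius + 1, dtype=x.dtype, device=x.device)
    kernel = torch.exp(-(coords ** 2) / (2 * sigma ** 2))
    kernel = kernel / kernel.sum()
    for dim in range(3):
        shape = [1, 1, 1, 1, 1]
        shape[2 + dim] = kernel.numel()
        padding = [radius if d == dim else 0 for d in range(3)]
        x = F.conv3d(x, kernel.view(shape), padding=padding)
    return x


def multiscale_dice_loss(warped_mask, fixed_mask, sigmas=(0, 1, 2, 4)):
    """Average soft-Dice loss over several Gaussian-smoothed label scales."""
    losses = []
    for sigma in sigmas:
        a = warped_mask if sigma == 0 else gaussian_blur3d(warped_mask, sigma)
        b = fixed_mask if sigma == 0 else gaussian_blur3d(fixed_mask, sigma)
        losses.append(1.0 - soft_dice(a, b).mean())
    return torch.stack(losses).mean()


def sdm_loss(warped_sdm, fixed_sdm):
    """L1 distance between the signed distance maps derived from the two masks."""
    return (warped_sdm - fixed_sdm).abs().mean()


def weak_supervision_loss(warped_mask, fixed_mask, warped_sdm, fixed_sdm, alpha=0.5):
    """Combined objective: multiscale Dice plus SDM similarity (alpha is illustrative)."""
    return multiscale_dice_loss(warped_mask, fixed_mask) + alpha * sdm_loss(warped_sdm, fixed_sdm)
```

In such a setup, the network never sees a ground-truth deformation field: gradients flow only through the warped moving-image segmentation (and its signed distance map), which is what makes the supervision "weak".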
