Automatic prostate segmentation using deep learning on clinically diverse 3D transrectal ultrasound images.

PURPOSE Needle-based procedures for diagnosing and treating prostate cancer, such as biopsy and brachytherapy, have incorporated three-dimensional (3D) transrectal ultrasound (TRUS) imaging to improve needle guidance. Using these images effectively typically requires the physician to manually segment the prostate to define the margins used for accurate registration, targeting, and other guidance techniques. However, manual prostate segmentation is a time-consuming and difficult intraoperative process, often performed while the patient is under sedation (biopsy) or anesthetic (brachytherapy). An automatic 3D TRUS prostate segmentation method could provide physicians with a fast and accurate segmentation, enabling an efficient workflow with improved patient throughput and faster patient access to care. The purpose of this study was to develop a supervised deep learning-based method to segment the prostate in 3D TRUS images from different facilities, generated using multiple acquisition methods and commercial ultrasound machine models, to create a generalizable algorithm for needle-based prostate cancer procedures.

METHODS Our proposed method for 3D segmentation involved prediction on two-dimensional (2D) slices sampled radially around the approximate central axis of the prostate, followed by reconstruction into a 3D surface. A 2D U-Net was modified, trained, and validated using 84 end-fire and 122 side-fire 3D TRUS images acquired during clinical biopsy and brachytherapy procedures. Modifications to the expansion section of the standard U-Net included the addition of 50% dropout layers to reduce overfitting, and the use of transpose convolutions instead of standard upsampling followed by convolution to improve performance.
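The radial slice sampling that underlies this approach can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the function name `radial_slices` and the nearest-neighbour sampling scheme are assumptions made for illustration only. Each slice is the plane containing the (approximate) central z-axis at angle theta, so angles only need to span half a rotation.

```python
import numpy as np

def radial_slices(vol, n_slices, n_radii=None):
    """Sample 2D slices radially about the central (z) axis of a 3D volume.

    vol: 3D array indexed (z, y, x).
    Returns an array of shape (n_slices, nz, n_radii), where slice k is the
    plane through the z-axis at angle theta = k * pi / n_slices, sampled with
    nearest-neighbour interpolation (illustrative sketch only).
    """
    nz, ny, nx = vol.shape
    cy, cx = (ny - 1) / 2.0, (nx - 1) / 2.0
    n_radii = n_radii or nx
    # Signed radii along the in-plane axis, spanning the inscribed extent.
    r = np.linspace(-min(cy, cx), min(cy, cx), n_radii)
    slices = np.empty((n_slices, nz, n_radii), dtype=vol.dtype)
    for k in range(n_slices):
        theta = k * np.pi / n_slices  # pi, not 2*pi: planes are symmetric
        ys = np.clip(np.round(cy + r * np.sin(theta)).astype(int), 0, ny - 1)
        xs = np.clip(np.round(cx + r * np.cos(theta)).astype(int), 0, nx - 1)
        slices[k] = vol[:, ys, xs]  # advanced indexing -> (nz, n_radii)
    return slices
```

After the 2D network predicts a boundary on each such slice, the per-slice contours can be rotated back to their original angles and meshed into a closed 3D surface.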
Manual contours provided the annotations for the training, validation, and testing datasets, with the testing dataset consisting of 20 end-fire and 20 side-fire unseen 3D TRUS images. Since predicting on 2D slices has the potential to lose spatial and structural information, our reconstructed 2D approach was compared to optimized 3D networks, including 3D V-Net, Dense V-Net, and high-resolution 3D-Net, following an investigation into different loss functions. An extended selection of absolute and signed error metrics was computed to assess 3D segmentation accuracy, including pixel-map comparisons (Dice similarity coefficient (DSC), recall, and precision), volume percent difference (VPD), mean surface distance (MSD), and Hausdorff distance (HD).

RESULTS Overall, our proposed reconstructed modified U-Net performed with a median [first quartile, third quartile] absolute DSC, recall, precision, VPD, MSD, and HD of 94.1 [92.6, 94.9]%, 96.0 [93.1, 98.5]%, 93.2 [88.8, 95.4]%, 5.78 [2.49, 11.50]%, 0.89 [0.73, 1.09] mm, and 2.89 [2.37, 4.35] mm, respectively. Compared to the best-performing optimized 3D network (3D V-Net with a Dice plus cross-entropy loss function), our proposed method showed a significant improvement across nearly all metrics. A computation time of less than 0.7 s per prostate was observed, which is sufficiently short for intraoperative implementation.

CONCLUSIONS Our proposed algorithm provided a fast and accurate 3D segmentation across variable 3D TRUS prostate images, enabling a generalizable intraoperative solution for needle-based prostate cancer procedures. This method has the potential to decrease procedure times, supporting the increasing interest in needle-based 3D TRUS approaches.
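The pixel-map and volume metrics listed above follow directly from the confusion counts of two binary masks. A minimal sketch (the helper name `segmentation_metrics` is ours; the surface-based metrics MSD and HD are omitted because they require extracting boundary points or meshes):

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Compute DSC, recall, and precision (as fractions) plus volume percent
    difference (VPD, %) for two binary masks of identical shape.

    Surface metrics (MSD, HD) need boundary extraction and are not shown.
    Assumes the ground-truth mask is non-empty.
    """
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.count_nonzero(pred & truth)    # true positives
    fp = np.count_nonzero(pred & ~truth)   # false positives
    fn = np.count_nonzero(~pred & truth)   # false negatives
    dsc = 2.0 * tp / (2 * tp + fp + fn)    # Dice similarity coefficient
    recall = tp / (tp + fn)                # a.k.a. sensitivity
    precision = tp / (tp + fp)
    vpd = 100.0 * abs(int(pred.sum()) - int(truth.sum())) / truth.sum()
    return dsc, recall, precision, vpd
```

Reporting both absolute and signed variants of VPD, MSD, and HD, as the abstract describes, distinguishes overall error magnitude from systematic over- or under-segmentation.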
