Estimating 3-dimensional liver motion using deep learning and 2-dimensional ultrasound images

The main purpose of this study is to construct a system that tracks the tumor position during radiofrequency ablation (RFA) treatment. Existing tumor tracking systems track the tumor only within a two-dimensional (2D) ultrasound (US) image; they cannot accommodate the three-dimensional (3D) motion of the organs, so the ablation target may be lost. In this study, we propose a method for estimating the 3D motion of the liver as a preliminary step toward tumor tracking. In current 3D motion estimation systems, the motion of surrounding structures during RFA can also reduce tumor visibility in US images; we therefore additionally aim to improve the 3D liver motion estimate by improving liver segmentation. We propose a novel approach that estimates the relative six-degree-of-freedom motion (x, y, z, roll, pitch, and yaw) between the liver and the US probe, and from it the overall motion of the liver. A convolutional neural network (CNN) regresses the 3D displacement from 2D US images. To improve the estimation accuracy, we feed a segmentation map of the liver region into the regression network as an additional input. Specifically, we improve the extraction of the liver region by using a bi-directional ConvLSTM U-Net with densely connected convolutions (BCDU-Net). BCDU-Net markedly improves the segmentation accuracy and, as a result, the motion estimation accuracy as well. The mean absolute error in the out-of-plane direction is 0.0645 mm/frame. The experimental results show the effectiveness of our method of identifying liver motion with BCDU-Net and a CNN; precise segmentation of the liver by BCDU-Net also enhances the performance of the liver motion estimation.
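The overall liver motion is obtained by composing the per-frame relative six-degree-of-freedom estimates. As an illustrative sketch of that composition step (not the authors' implementation; the function names and the roll-pitch-yaw Euler convention are assumptions), the CNN's per-frame outputs can be chained into a cumulative homogeneous transform:

```python
import numpy as np

def euler_to_matrix(roll, pitch, yaw):
    """Rotation matrix from roll (about x), pitch (about y), yaw (about z), in radians."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def accumulate_pose(relative_motions):
    """Compose per-frame relative motions (dx, dy, dz, droll, dpitch, dyaw)
    into a single cumulative 4x4 homogeneous transform."""
    T = np.eye(4)
    for dx, dy, dz, droll, dpitch, dyaw in relative_motions:
        step = np.eye(4)
        step[:3, :3] = euler_to_matrix(droll, dpitch, dyaw)
        step[:3, 3] = [dx, dy, dz]
        T = T @ step  # right-multiply: each motion is relative to the previous frame
    return T
```

For example, ten frames each with a 0.05 mm out-of-plane step and no rotation accumulate to a 0.5 mm displacement; small per-frame errors accumulate the same way, which is why the per-frame out-of-plane error matters for the tracking application.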
