Multi-task learning for ultrasound image formation and segmentation directly from raw in vivo data

Deep neural networks have demonstrated the potential to both create images and segment structures of interest directly from raw ultrasound data in one step, through an end-to-end transformation. Building on previous work from our group, subaperture-beamformed IQ data from in vivo breast cysts was the input to a custom network that outputs parallel B-mode and cyst segmentation images. Our new model includes bright point and line targets during training to overcome the limited field of view challenges encountered with our previous deep learning models, which were trained purely on simulations of cysts and homogeneous tissue structures. This new network resulted in cyst contrast values of $-33.07\pm 10.79\ \text{dB}$, $-32.09\pm 0.04\ \text{dB}$, and $-15.95\pm 12.04\ \text{dB}$ for simulated, phantom, and in vivo data, respectively, which is an improvement over the contrast of corresponding delay-and-sum (DAS) images (i.e., $-17.37\pm 6.06\ \text{dB}$, $-17.14\pm 0.16\ \text{dB}$, and $-14.80\pm 1.30\ \text{dB}$ for simulated, phantom, and in vivo data, respectively). Higher Dice similarity coefficients (DSCs) were obtained with in vivo data with the new network ($0.83\pm 0.01$) when compared to our previous model ($0.63\pm 0.03$), and fewer false positives were encountered. This work demonstrates the feasibility of using multi-task learning to simultaneously form a B-mode image and cyst segmentation with a wider field of view that is appropriate for in vivo breast imaging. These results have promising implications for multiple tasks, including emphasizing or de-emphasizing structures of interest for diagnostic, interventional, automated, and semi-automated decision making.
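The two evaluation metrics quoted above can be sketched as follows. This is a minimal illustration, not the authors' exact implementation: it assumes contrast is computed as the ratio of mean envelope amplitudes inside and outside the cyst in dB, and that the DSC compares binary network and ground-truth masks; the function names and region-of-interest handling are hypothetical.

```python
import numpy as np

def contrast_db(envelope, roi_in, roi_out):
    """Cyst contrast in dB, assumed here as 20*log10 of the ratio of mean
    envelope amplitude inside the cyst to that of the background."""
    mu_in = envelope[roi_in].mean()
    mu_out = envelope[roi_out].mean()
    return 20.0 * np.log10(mu_in / mu_out)

def dice_coefficient(pred_mask, true_mask):
    """Dice similarity coefficient between two binary segmentation masks:
    2|A ∩ B| / (|A| + |B|)."""
    pred = pred_mask.astype(bool)
    true = true_mask.astype(bool)
    intersection = np.logical_and(pred, true).sum()
    return 2.0 * intersection / (pred.sum() + true.sum())

# Toy example: a 4x4 envelope image with a dark "cyst" in one corner.
env = np.ones((4, 4))
env[:2, :2] = 0.1                       # cyst interior is 20 dB darker
roi_in = np.zeros((4, 4), dtype=bool)
roi_in[:2, :2] = True
roi_out = ~roi_in

print(contrast_db(env, roi_in, roi_out))   # -20 dB for this toy case
print(dice_coefficient(roi_in, roi_in))    # perfect overlap -> 1.0
```

A more negative contrast indicates a darker (better-resolved) anechoic cyst relative to background, which is why the network's $-15.95$ dB in vivo value improves on DAS.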
