Multi-structure Segmentation from Partially Labeled Datasets. Application to Body Composition Measurements on CT Scans

Labeled data is the current bottleneck of medical imaging research. Substantial efforts are made to generate segmentation masks that characterize a given organ, and the community ends up with multiple label maps of individual structures in different cases that are not suitable for current multi-organ segmentation frameworks. Our objective is to leverage segmentations of multiple organs in different cases to train a robust multi-organ deep learning segmentation network. We propose a modified cost function that takes into account only the voxels labeled in the image, ignoring unlabeled structures. We evaluate the proposed methodology in the context of pectoralis muscle and subcutaneous fat segmentation on chest CT scans. Six different structures are segmented from an axial slice centered on the transversal aorta. We compare the performance of a network trained on 3,000 images in which only one structure has been annotated (PUNet) against six UNets (one per structure) and a multi-class UNet trained on 500 fully annotated images, showing equivalence between the three methods (Dice coefficients of 0.909, 0.906, and 0.909, respectively). We further propose an architectural modification that adds convolutions to the skip connections (CUNet). When trained with partially labeled images, it outperforms the other three methods by a statistically significant margin (Dice 0.916, p < 0.0001). We therefore show that (a) when the number of organ annotations is kept constant, training with partially labeled images is equivalent to training with wholly labeled data, and (b) adding convolutions in the skip connections improves performance.
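To make the two key ideas concrete, the partial-label cost function and the convolutional skip connections can be sketched in a few lines of PyTorch. This is a minimal sketch under assumed tensor shapes and names (`partial_dice_loss`, `labeled_mask`, and `ConvSkip` are illustrative, not the authors' implementation): the loss averages a soft Dice term only over the (image, structure) pairs that were actually annotated, and the skip block applies a small convolutional stack to encoder features before they are concatenated with the decoder path.

```python
# Minimal sketch (not the paper's code) of a partial-label Dice loss and a
# convolutional skip-connection block, assuming 2D inputs of shape (B, C, H, W).
import torch
import torch.nn as nn


def partial_dice_loss(logits: torch.Tensor,
                      target: torch.Tensor,
                      labeled_mask: torch.Tensor,
                      eps: float = 1e-6) -> torch.Tensor:
    """Soft Dice loss averaged only over annotated (image, structure) pairs.

    logits:       (B, C, H, W) raw network outputs.
    target:       (B, C, H, W) one-hot labels; entries for unlabeled structures
                  are ignored via `labeled_mask`.
    labeled_mask: (B, C) boolean, True where a structure was annotated in that image.
    """
    probs = torch.softmax(logits, dim=1)
    dims = (2, 3)                                    # spatial dimensions
    intersection = (probs * target).sum(dims)        # (B, C)
    denominator = probs.sum(dims) + target.sum(dims)
    dice = (2.0 * intersection + eps) / (denominator + eps)

    # Unlabeled structures contribute nothing to the loss.
    labeled = labeled_mask.float()
    return ((1.0 - dice) * labeled).sum() / labeled.sum().clamp(min=1.0)


class ConvSkip(nn.Module):
    """Hypothetical skip-connection block in the spirit of CUNet: encoder
    features pass through a small convolutional stack before concatenation
    with the decoder path, instead of being copied unchanged."""

    def __init__(self, channels: int):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, encoder_features: torch.Tensor) -> torch.Tensor:
        return self.block(encoder_features)
```

Because unlabeled structures are simply masked out of the average, each partially annotated image still yields a valid gradient for the structure that was labeled, which is what allows single-structure annotations from many different cases to be pooled into one multi-structure network.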
