UDCT: Unsupervised data to content transformation with histogram-matching cycle-consistent generative adversarial networks

Image segmentation is a common task across a broad range of research fields. To tackle increasingly complex images, artificial intelligence (AI) based approaches have emerged to overcome the shortcomings of traditional feature-detection methods. Because most AI research is publicly accessible and the required algorithms can now be programmed in many popular languages, such approaches are becoming widespread. However, these methods often require data labeled by the researcher to provide a training target for the algorithms to converge to the desired result. This labeling is a limiting factor in many cases and can become prohibitively time consuming. Inspired by the ability of cycle-consistent generative adversarial networks (cycleGANs) to perform style transfer, we outline a method in which a computer-generated set of images is used to segment the true images. We benchmark our unsupervised approach against a state-of-the-art supervised cell-counting network on the VGG Cells dataset and show that it is not only competitive but can also precisely locate individual cells. We demonstrate the power of this method by segmenting bright-field images of cell cultures, a live/dead assay of C. elegans, and X-ray computed tomography of metallic nanowire meshes.
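The core idea is that one generator translates synthetic, perfectly labeled images into realistic-looking ones while a second generator maps real images back into the synthetic (and hence segmented) domain, with a histogram term keeping intensities of translated images faithful to the target domain. The sketch below is a minimal, hypothetical illustration of such an objective, not the authors' implementation: the generator/discriminator handles (G_AB, G_BA, D_B), the soft_histogram helper, and the loss weights are illustrative assumptions about how histogram matching could be added to a standard cycle-consistency loss.

```python
import torch
import torch.nn as nn

def soft_histogram(x, bins=64, vmin=0.0, vmax=1.0, sigma=0.01):
    """Differentiable (Gaussian-kernel) intensity histogram, normalized per image."""
    centers = torch.linspace(vmin, vmax, bins, device=x.device)
    diffs = x.reshape(x.size(0), -1, 1) - centers          # (N, pixels, bins)
    weights = torch.exp(-0.5 * (diffs / sigma) ** 2)
    hist = weights.sum(dim=1)
    return hist / hist.sum(dim=1, keepdim=True)

def generator_loss(G_AB, G_BA, D_B, real_A, real_B,
                   lambda_cyc=10.0, lambda_hist=1.0):
    """Generator-side objective: adversarial + cycle consistency + histogram matching.

    real_A: synthetic (labeled) images, real_B: unlabeled real images.
    Weights lambda_cyc and lambda_hist are placeholder values for this sketch.
    """
    fake_B = G_AB(real_A)                                   # synthetic -> realistic
    rec_A = G_BA(fake_B)                                    # back to synthetic domain

    d_out = D_B(fake_B)
    adv = nn.functional.mse_loss(d_out, torch.ones_like(d_out))   # LSGAN-style term
    cyc = nn.functional.l1_loss(rec_A, real_A)                    # cycle consistency
    hist = nn.functional.l1_loss(soft_histogram(fake_B),          # match intensity
                                 soft_histogram(real_B))          # distribution of B

    return adv + lambda_cyc * cyc + lambda_hist * hist
```

In this sketch the histogram term acts on global intensity statistics only, which is one plausible way to discourage the generators from hiding structural information in subtle intensity shifts; the symmetric losses for the reverse direction (B to A) would be constructed analogously.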
