Unsupervised data to content transformation with histogram-matching cycle-consistent generative adversarial networks

The segmentation of images is a common task across a broad range of research fields. To tackle increasingly complex images, artificial-intelligence-based approaches have emerged to overcome the shortcomings of traditional feature-detection methods. Because most artificial-intelligence research is made publicly accessible and the required algorithms can now be implemented in many popular programming languages, the use of such approaches is becoming widespread. However, these methods often require data labelled by the researcher to provide a training target for the algorithms to converge to the desired result. This labelling is a limiting factor in many cases and can become prohibitively time-consuming. Inspired by the ability of cycle-consistent generative adversarial networks to perform style transfer, we outline a method whereby a computer-generated set of images is used to segment the true images. We benchmark our unsupervised approach against a state-of-the-art supervised cell-counting network on the VGG Cells dataset and show that it is not only competitive but also able to precisely locate individual cells. We demonstrate the power of this method by segmenting bright-field images of cell cultures, images from a live/dead assay of C. elegans, and X-ray computed tomography of metallic nanowire meshes.

Labelling training data for machine-learning models is very time-consuming. A new method shows that content transformation can be learned effectively from generated data, avoiding the need for any manual labelling in segmentation and classification tasks.
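The two ideas named in the title can be illustrated with a brief sketch: histogram matching to bring the grey-value statistics of the computer-generated images closer to the acquired data, and a CycleGAN-style cycle-consistency term to learn the translation between the synthetic and real domains without paired labels. The sketch below is illustrative only, assuming PyTorch and scikit-image; the generator names (G_syn2real, G_real2syn) and the loss weight are placeholders, not the authors' implementation.

```python
# Illustrative sketch only: histogram matching plus a CycleGAN-style
# cycle-consistency loss between a synthetic (computer-generated) domain
# and a real image domain. Assumes PyTorch and scikit-image are available.
import torch
import torch.nn as nn
from skimage.exposure import match_histograms

def match_synthetic_to_real(synthetic_img, real_img):
    """Shift the grey-value distribution of a generated image towards a real one."""
    return match_histograms(synthetic_img, real_img)

l1 = nn.L1Loss()

def cycle_consistency_loss(G_syn2real, G_real2syn, synthetic, real, lam=10.0):
    """Round-trip both domains and penalise the reconstruction error (L1)."""
    fake_real = G_syn2real(synthetic)   # synthetic -> "real"-looking image
    rec_syn = G_real2syn(fake_real)     # ... and back to the synthetic domain
    fake_syn = G_real2syn(real)         # real -> synthetic-looking image
    rec_real = G_syn2real(fake_syn)     # ... and back to the real domain
    return lam * (l1(rec_syn, synthetic) + l1(rec_real, real))
```

In a full cycle-consistent GAN this term would be combined with the adversarial losses of the two discriminators; in a scheme of this kind, applying the real-to-synthetic generator to an acquired image yields a label-like output from which the segmentation can be read.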
