Bayesian Neural Networks for Cellular Image Classification and Uncertainty Analysis

Over the last decade, deep learning models have rapidly gained popularity for their ability to achieve state-of-the-art performance across diverse inference settings. Novel application domains, however, impose requirements that go beyond accurate predictions and call for reliable uncertainty measures. The aims of this study are to implement Bayesian neural networks and to use the resulting uncertainty estimates both to improve predictions and to analyze datasets. We identify two main advantages of modeling the predictive uncertainty of deep neural networks in classification tasks. The first is the possibility of discarding highly uncertain predictions to increase model accuracy. The second is the identification of unfamiliar patterns in the data, which correspond to outliers under the model's representation of the training data distribution. Such outliers can be further characterized as either corrupted observations or data belonging to a different domain. We demonstrate both advantages on benchmark datasets. Furthermore, we apply the Bayesian approach to a biomedical imaging dataset in which cancer cells are treated with diverse drugs, and show how uncertainty analysis can increase classification accuracy and identify noise in the ground-truth labels.
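A practical way to obtain this kind of predictive uncertainty is Monte Carlo dropout, in which dropout is kept active at test time and the predictive distribution is approximated by averaging several stochastic forward passes. The sketch below is a minimal illustration in PyTorch, not the study's actual implementation; the model, the number of passes, and the entropy threshold in the usage comments are placeholders. Predictions whose predictive entropy exceeds the threshold can either be discarded to raise accuracy on the retained subset or flagged as candidate outliers (corrupted observations or out-of-domain data).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def enable_mc_dropout(model: nn.Module) -> None:
    """Keep dropout layers stochastic at test time (Monte Carlo dropout)."""
    for module in model.modules():
        if isinstance(module, (nn.Dropout, nn.Dropout2d, nn.Dropout3d)):
            module.train()


@torch.no_grad()
def predict_with_uncertainty(model: nn.Module, x: torch.Tensor, n_samples: int = 50):
    """Average n_samples stochastic forward passes and return the
    approximate predictive distribution together with its entropy."""
    model.eval()
    enable_mc_dropout(model)
    probs = torch.stack([F.softmax(model(x), dim=-1) for _ in range(n_samples)])
    mean_probs = probs.mean(dim=0)  # approximate predictive distribution per sample
    entropy = -(mean_probs * mean_probs.clamp_min(1e-12).log()).sum(dim=-1)
    return mean_probs, entropy


# Usage sketch: keep only confident predictions, flag the rest as potential outliers.
# `model`, `images`, and the 0.5-nat threshold are hypothetical placeholders.
# mean_probs, entropy = predict_with_uncertainty(model, images)
# confident = entropy < 0.5
# retained_predictions = mean_probs.argmax(dim=-1)[confident]   # higher-accuracy subset
# flagged_as_outliers = (~confident).nonzero(as_tuple=True)[0]  # candidates for inspection
```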
