Probabilistic models are a critical part of the modern deep learning toolbox, ranging from generative models (VAEs, GANs) and sequence-to-sequence models used in machine translation and speech processing to models over function spaces (conditional neural processes, neural processes). Given the size and complexity of these models, safely deploying them in applications requires tools that analyze their behavior rigorously and provide guarantees that they are consistent with a set of desirable properties or specifications. For example, a machine translation model should produce semantically equivalent outputs for innocuous changes in its input, and a functional regression model that learns a distribution over monotonic functions should predict a larger value at a larger input. Verifying these properties requires a new framework that goes beyond the notions of verification studied for deterministic feedforward networks, since demanding worst-case guarantees from probabilistic models is likely to produce conservative or vacuous results. We propose a novel formulation of verification for deep probabilistic models that take conditioning inputs and sample latent variables in the course of producing an output: we require that the output of the model satisfy a linear constraint with high probability over the sampling of latent variables, for every choice of conditioning input. We show that rigorous lower bounds on the probability that the constraint is satisfied can be computed efficiently. Experiments with neural processes show that several properties of interest when modeling function spaces (monotonicity, convexity) can be expressed within this framework and verified efficiently with our algorithms.
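To make the verification target concrete: the specification asks that, for every conditioning input x in some input set, Pr_{z ~ p(z)}[ c^T f(x, z) + d >= 0 ] >= 1 - delta, where f is the model, z the sampled latent variables, and (c, d) define the linear constraint. The sketch below is a minimal illustration of this kind of specification on a toy latent-variable regressor; the names (toy_decoder, monotonicity_spec) are hypothetical, and the Hoeffding-style Monte Carlo confidence bound shown here is a statistical stand-in for the paper's rigorous analytic lower bounds, which are not reproduced.

```python
# A minimal sketch of a probabilistic specification for a latent-variable
# model: monotonicity of the output in x, with high probability over z.
# This is an illustrative toy, not the paper's verification algorithm.
import numpy as np

rng = np.random.default_rng(0)

def toy_decoder(x, z):
    """Hypothetical stochastic regressor f(x, z): cubic trend plus latent noise."""
    return x ** 3 + 0.1 * z

def monotonicity_spec(z, x_lo=-1.0, x_hi=1.0):
    """Linear constraint on outputs: f(x_hi, z) - f(x_lo, z) >= 0."""
    return toy_decoder(x_hi, z) - toy_decoder(x_lo, z) >= 0.0

def satisfaction_lower_bound(spec, n_samples=10_000, delta=1e-3):
    """Hoeffding lower confidence bound on Pr_z[spec(z)], valid with
    probability at least 1 - delta over the drawn samples."""
    z = rng.standard_normal(n_samples)
    p_hat = np.mean([spec(zi) for zi in z])
    slack = np.sqrt(np.log(1.0 / delta) / (2.0 * n_samples))
    return max(0.0, p_hat - slack)

print(f"Pr[monotone] >= {satisfaction_lower_bound(monotonicity_spec):.4f}")
```

Note that such sampling-based bounds only certify the property for the sampled latent draws and a fixed pair of inputs; the framework described above instead bounds the satisfaction probability rigorously and uniformly over all conditioning inputs in a set.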