Variational Variance: Simple and Reliable Predictive Variance Parameterization

An often overlooked sleight of hand performed with variational autoencoders (VAEs), which has proliferated throughout the literature, is to misrepresent the posterior predictive (decoder) distribution's expectation as a sample from that distribution. Jointly modeling the mean and variance of a normal predictive distribution can result in fragile optimization, where the learned parameters are ultimately ineffective at generating realistic samples. The two most common principled methods to avoid this problem are to either fix the variance or use the single-parameter Bernoulli distribution; both, however, have drawbacks. Unfortunately, the problem of jointly optimizing mean and variance networks affects not only unsupervised modeling of continuous data (a category encompassing many VAE applications) but also regression tasks. To date, only a handful of papers have attempted to resolve these difficulties. In this article, we propose an alternative and attractively simple solution: treat predictive variance variationally. Our approach synergizes with existing VAE-specific theoretical results and, being probabilistically principled, provides access to Empirical Bayes and other techniques that utilize the observed data to construct well-informed priors. We extend the VAMP prior, which assumes a uniform mixture, by inferring mixture proportions and assignments. This extension improves our ability to accurately capture heteroscedastic variance. Notably, our methods experimentally outperform existing techniques on both supervised and unsupervised modeling of continuous data.
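To make "treat predictive variance variationally" concrete, here is a minimal sketch (not the paper's exact implementation) for the heteroscedastic Gaussian regression case: instead of point-estimating the noise variance, the noise precision gets a Gamma variational posterior and a Gamma prior, so both the expected Gaussian log-likelihood and the KL penalty in the per-sample ELBO have closed forms. The function names, the PyTorch framework, and the fixed Gamma(1, 1) prior are illustrative assumptions.

```python
import math
import torch

def gamma_kl(alpha_q, beta_q, alpha_p, beta_p):
    """KL( Gamma(alpha_q, beta_q) || Gamma(alpha_p, beta_p) ), rate parameterization."""
    return ((alpha_q - alpha_p) * torch.digamma(alpha_q)
            - torch.lgamma(alpha_q) + torch.lgamma(alpha_p)
            + alpha_p * (torch.log(beta_q) - torch.log(beta_p))
            + alpha_q * (beta_p - beta_q) / beta_q)

def per_sample_elbo(y, mu, alpha_q, beta_q, alpha_p=1.0, beta_p=1.0):
    """ELBO contribution for one observation y with predicted mean mu.

    The Gaussian noise precision lambda is integrated out against
    q(lambda) = Gamma(alpha_q, beta_q) rather than point-estimated:
      E_q[log N(y | mu, 1/lambda)]
        = 0.5 * (E_q[log lambda] - log(2*pi) - E_q[lambda] * (y - mu)**2)
    with E_q[lambda] = alpha_q / beta_q and
         E_q[log lambda] = digamma(alpha_q) - log(beta_q).
    """
    e_prec = alpha_q / beta_q
    e_log_prec = torch.digamma(alpha_q) - torch.log(beta_q)
    exp_ll = 0.5 * (e_log_prec - math.log(2.0 * math.pi)
                    - e_prec * (y - mu) ** 2)
    kl = gamma_kl(alpha_q, beta_q,
                  torch.as_tensor(alpha_p), torch.as_tensor(beta_p))
    return exp_ll - kl
```

In this setting, alpha_q and beta_q would typically be emitted by a network head alongside mu, and the prior over the precision could be data-informed, for example via the inferred-mixture extension of the VAMP prior described above, rather than the fixed Gamma(1, 1) used here for simplicity.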
