Variational Variance: Simple and Reliable Predictive Variance Parameterization

An often overlooked sleight of hand performed with variational autoencoders (VAEs), now widespread in the literature, is to present the expectation of the posterior predictive (decoder) distribution as if it were a sample from that distribution. Jointly modeling the mean and variance of a normal predictive distribution can make optimization fragile, and the learned parameters may ultimately be ineffective at generating realistic samples. The two most common principled methods to avoid this problem are to fix the variance or to use the single-parameter Bernoulli distribution; both, however, have drawbacks. Unfortunately, the problem of jointly optimizing mean and variance networks affects not only unsupervised modeling of continuous data (a category covering many VAE applications) but also regression tasks. To date, only a handful of papers have attempted to resolve these difficulties. In this article, we propose an alternative and attractively simple solution: treat predictive variance variationally. Our approach synergizes with existing VAE-specific theoretical results and, being probabilistically principled, provides access to Empirical Bayes and other techniques that use the observed data to construct well-informed priors. We extend the VAMP prior, which assumes a uniform mixture, by inferring mixture proportions and assignments. This extension improves our ability to capture heteroscedastic variance accurately. Notably, our methods experimentally outperform existing techniques on both supervised and unsupervised modeling of continuous data.
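To make "treat predictive variance variationally" concrete, the sketch below places an amortized Gamma variational posterior over the per-point noise precision of a heteroscedastic Gaussian regression head; the expected log-likelihood under that posterior has a closed form, and a KL term anchors the precision to a prior. This is a minimal illustration under our own assumptions, not the paper's exact model: the class name `VariationalVarianceHead`, the networks `mu_net`, `alpha_net`, and `beta_net`, the layer sizes, and the unit Gamma prior are all hypothetical choices.

```python
import math
import torch
import torch.nn as nn
from torch.distributions import Gamma, kl_divergence

class VariationalVarianceHead(nn.Module):
    """Hypothetical sketch: amortized Gamma posterior q(lambda | x) over the
    Gaussian noise precision lambda, so E_q[log N(y | mu, lambda^{-1})] is
    available in closed form."""

    def __init__(self, d_in, d_hidden=50):
        super().__init__()
        self.mu_net = nn.Sequential(nn.Linear(d_in, d_hidden), nn.ELU(),
                                    nn.Linear(d_hidden, 1))
        # Softplus keeps the Gamma shape/rate parameters strictly positive.
        self.alpha_net = nn.Sequential(nn.Linear(d_in, d_hidden), nn.ELU(),
                                       nn.Linear(d_hidden, 1), nn.Softplus())
        self.beta_net = nn.Sequential(nn.Linear(d_in, d_hidden), nn.ELU(),
                                      nn.Linear(d_hidden, 1), nn.Softplus())

    def forward(self, x, y, prior=Gamma(1.0, 1.0)):  # unit Gamma prior: an assumption
        mu = self.mu_net(x)
        alpha = self.alpha_net(x) + 1e-6  # shape (concentration)
        beta = self.beta_net(x) + 1e-6    # rate
        q = Gamma(alpha, beta)
        # Closed-form expected Gaussian log-likelihood under q, using
        # E_q[log lambda] = digamma(alpha) - log(beta) and E_q[lambda] = alpha / beta.
        ell = 0.5 * (torch.digamma(alpha) - torch.log(beta)
                     - math.log(2.0 * math.pi)
                     - (alpha / beta) * (y - mu) ** 2)
        elbo = ell - kl_divergence(q, prior)  # per-point evidence lower bound
        return -elbo.mean()  # minimize the negative ELBO
```

In this sketch the KL divergence against the prior is what regularizes joint mean-variance training; an Empirical Bayes variant would fit the prior's concentration and rate to the observed data, and the VAMP-style extension described above would replace the single fixed prior with a learned mixture whose proportions are also inferred.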
