Statistical Foundation of Variational Bayes Neural Networks

Despite the popularity of Bayesian neural networks in recent years, their use remains limited in complex and big-data settings due to the computational cost of full posterior evaluations. Variational Bayes (VB) provides a useful alternative that circumvents the computational cost and time complexity of generating samples from the true posterior via Markov chain Monte Carlo (MCMC) techniques. The efficacy of VB methods is well established in the machine learning literature; however, their broader impact is hindered by a lack of theoretical validity from a statistical perspective, and only a few results address the theoretical properties of VB, especially in non-parametric problems. In this paper, we establish the fundamental result of posterior consistency for the mean-field variational posterior (VP) of a feed-forward artificial neural network model. The paper spells out the conditions needed to guarantee that the VP concentrates on Hellinger neighborhoods of the true density function. In addition, we discuss the role of the scale parameter of the prior and its influence on the convergence rates. The argument rests mainly on two results: (1) the rate at which the true posterior accumulates mass around the truth, and (2) the rate at which the Kullback-Leibler (KL) distance between the variational posterior and the true posterior grows. The theory provides guidelines for building prior distributions for Bayesian neural network models, along with an assessment of the accuracy of the corresponding VB implementation.
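To make the statement concrete, the display below sketches the generic form such a consistency result takes. The notation (pi(. | X_{1:n}) for the true posterior, q* for the mean-field variational posterior, f_0 for the true density, d_H for the Hellinger distance) is standard shorthand chosen for this sketch; the precise conditions and constants are those of the paper, not this display.

\[
q^* \;=\; \operatorname*{arg\,min}_{q \in \mathcal{Q}_{\mathrm{MF}}} \mathrm{KL}\!\big(q \,\big\|\, \pi(\cdot \mid X_{1:n})\big),
\qquad
q^*\big(\{f : d_H(f, f_0) > \varepsilon\}\big) \;\longrightarrow\; 0 \ \text{ in } P_{f_0}\text{-probability}.
\]

The two driving ingredients named in the abstract then typically take the form

\[
\pi\big(\{f : d_H(f, f_0) > \varepsilon\} \,\big|\, X_{1:n}\big) \;\le\; e^{-c\,n}
\qquad \text{and} \qquad
\mathrm{KL}\!\big(q^* \,\big\|\, \pi(\cdot \mid X_{1:n})\big) \;=\; o(n).
\]

These combine because, for any measurable set A, the data-processing inequality for the KL divergence gives q*(A) <= (KL(q* || pi(. | X_{1:n})) + log 2) / log(1 / pi(A | X_{1:n})), so an exponentially small true-posterior mass outside the Hellinger neighborhood forces the variational mass there to vanish as well.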

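Although the paper is theoretical, the object it studies is the minimizer of the negative ELBO over a fully factorized Gaussian family for a feed-forward network. The following is a minimal, self-contained PyTorch sketch of that setup (Bayes-by-backprop style, reparameterization gradients); the class names, one-hidden-layer architecture, and all hyperparameters are illustrative assumptions for this sketch, not taken from the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MeanFieldLinear(nn.Module):
    """Linear layer with fully factorized Gaussian variational posterior
    q(w) = N(mu, sigma^2 I) and a N(0, prior_sigma^2 I) prior (illustrative)."""
    def __init__(self, n_in, n_out, prior_sigma=1.0):
        super().__init__()
        self.w_mu = nn.Parameter(0.1 * torch.randn(n_out, n_in))
        self.w_rho = nn.Parameter(torch.full((n_out, n_in), -3.0))
        self.b_mu = nn.Parameter(torch.zeros(n_out))
        self.b_rho = nn.Parameter(torch.full((n_out,), -3.0))
        self.prior_sigma = prior_sigma

    def forward(self, x):
        # Reparameterization: w = mu + sigma * eps, with sigma = softplus(rho) > 0,
        # so gradients flow through mu and rho.
        w_sigma = F.softplus(self.w_rho)
        b_sigma = F.softplus(self.b_rho)
        w = self.w_mu + w_sigma * torch.randn_like(w_sigma)
        b = self.b_mu + b_sigma * torch.randn_like(b_sigma)
        return F.linear(x, w, b)

    def kl(self):
        # Closed-form KL(N(mu, sigma^2) || N(0, prior_sigma^2)), summed coordinatewise.
        def kl_term(mu, sigma):
            return (torch.log(self.prior_sigma / sigma)
                    + (sigma ** 2 + mu ** 2) / (2 * self.prior_sigma ** 2)
                    - 0.5).sum()
        return (kl_term(self.w_mu, F.softplus(self.w_rho))
                + kl_term(self.b_mu, F.softplus(self.b_rho)))

class BNN(nn.Module):
    """One-hidden-layer feed-forward network with mean-field variational layers."""
    def __init__(self, d_in, width, prior_sigma=1.0):
        super().__init__()
        self.hidden = MeanFieldLinear(d_in, width, prior_sigma)
        self.out = MeanFieldLinear(width, 1, prior_sigma)

    def forward(self, x):
        return self.out(torch.sigmoid(self.hidden(x)))

    def kl(self):
        return self.hidden.kl() + self.out.kl()

torch.manual_seed(0)
n = 256
x = torch.randn(n, 1)
y = torch.sin(3.0 * x) + 0.1 * torch.randn(n, 1)   # synthetic regression data

model = BNN(d_in=1, width=16)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
noise_var = 0.1 ** 2   # observation noise assumed known for this sketch

for step in range(2000):
    opt.zero_grad()
    nll = ((y - model(x)) ** 2).sum() / (2.0 * noise_var)  # Gaussian NLL up to a constant
    loss = nll + model.kl()   # negative ELBO, single-sample Monte Carlo estimate
    loss.backward()
    opt.step()

At convergence, the fitted means and scales define the mean-field VP q*; predictions are obtained by averaging several forward passes, each of which resamples the weights. The consistency theory above is what licenses treating this q* as a surrogate for the true posterior, and the role of prior_sigma in this sketch corresponds to the scale parameter whose influence on the convergence rates the paper discusses.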