Training opposing directed models using geometric mean matching

Unsupervised training of deep generative models with latent variables, and performing inference in them, remains a challenging problem for complex, high-dimensional distributions. One basic approach to this problem is the so-called Helmholtz machine, which trains an auxiliary model for approximate inference jointly with the generative model that is fitted to the training data. The top-down generative model is typically realized as a directed model that starts from a prior at the top and ends at the empirical distribution at the bottom. The approximate inference model runs in the opposite direction and is typically trained to efficiently infer high-probability latent states given observed data. Here we propose a new method, referred to as geometric mean matching (GMM), based on the idea that the generative model should stay close to the class of distributions that can be represented by the approximate inference distribution. We achieve this by interpreting both the top-down and the bottom-up directed models as approximate inference distributions and by defining the target distribution we fit to the training data as the geometric mean of the two. We present a lower bound on the log-likelihood of this model and show that optimizing this bound pressures the model to stay close to the approximate inference distributions. In the experimental section we demonstrate that this approach can fit deep generative models with many layers of binary stochastic hidden variables to complex, high-dimensional training distributions.
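A rough sketch of the construction described above (the notation here is ours, not taken from the abstract: p(x,h) is assumed to denote the top-down generative model over observations x and latent variables h, and q(x,h) the bottom-up model built from the approximate inference distribution):

\[
  p^{*}(x,h) \;=\; \frac{1}{Z}\,\sqrt{p(x,h)\,q(x,h)},
  \qquad
  Z \;=\; \sum_{x,h} \sqrt{p(x,h)\,q(x,h)} \;\le\; 1
  \quad \text{(Cauchy--Schwarz)},
\]
so that
\[
  \log p^{*}(x) \;=\; \log \sum_{h} \sqrt{p(x,h)\,q(x,h)} \;-\; \log Z
  \;\ge\; \log \sum_{h} \sqrt{p(x,h)\,q(x,h)}.
\]

Under this reading, maximizing the unnormalized term on the right maximizes a lower bound on log p*(x), and since log Z <= 0 with equality only when p and q coincide, the same objective also pushes the two directed models toward each other, which is the "pressure to stay close" mentioned in the abstract.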
