An autoencoder network uses a set of recognition weights to convert an input vector into a code vector. It then uses a set of generative weights to convert the code vector into an approximate reconstruction of the input vector. We derive an objective function for training autoencoders based on the Minimum Description Length (MDL) principle. The aim is to minimize the information required to describe both the code vector and the reconstruction error. We show that this information is minimized by choosing code vectors stochastically according to a Boltzmann distribution, where the generative weights define the energy of each possible code vector given the input vector. Unfortunately, if the code vectors use distributed representations, it is exponentially expensive to compute this Boltzmann distribution because it involves all possible code vectors. We show that the recognition weights of an autoencoder can be used to compute an approximation to the Boltzmann distribution and that this approximation gives an upper bound on the description length. Even when this bound is poor, it can be used as a Lyapunov function for learning both the generative and the recognition weights. We demonstrate that this approach can be used to learn factorial codes.
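To make the description-length bound concrete, here is a minimal sketch (not the authors' implementation) of gradient descent on a free-energy-style objective: the cost of the code plus the cost of the reconstruction error. It assumes binary code units with a factorial recognition distribution, a uniform Bernoulli(0.5) prior on each code bit, a unit-variance Gaussian reconstruction model, and a mean-field shortcut that feeds code probabilities rather than sampled code vectors through the generative weights; the names W_rec, W_gen, etc. are illustrative, not the paper's notation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def free_energy(x, W_rec, b_rec, W_gen, b_gen):
    """Upper bound on the description length (in nats) of one input vector x."""
    q = np.clip(sigmoid(W_rec @ x + b_rec), 1e-6, 1 - 1e-6)  # q(h_j = 1 | x)
    # Code cost: KL between the factorial recognition distribution and a
    # uniform Bernoulli(0.5) prior on each code bit ("bits-back" code cost).
    code_cost = np.sum(q * np.log(q / 0.5) + (1 - q) * np.log((1 - q) / 0.5))
    # Reconstruction cost: squared error under a unit-variance Gaussian model,
    # evaluated at the mean-field code (probabilities instead of binary samples).
    x_hat = W_gen @ q + b_gen
    recon_cost = 0.5 * np.sum((x - x_hat) ** 2)
    return code_cost + recon_cost, q, x_hat

def grad_step(x, W_rec, b_rec, W_gen, b_gen, lr=0.05):
    """One gradient-descent step on the bound, for a single input vector."""
    _, q, x_hat = free_energy(x, W_rec, b_rec, W_gen, b_gen)
    d_xhat = x_hat - x                                # dF / d x_hat
    d_q = np.log(q / (1 - q)) + W_gen.T @ d_xhat      # KL term + reconstruction term
    d_a = d_q * q * (1 - q)                           # back through the sigmoid
    W_gen -= lr * np.outer(d_xhat, q); b_gen -= lr * d_xhat
    W_rec -= lr * np.outer(d_a, x);    b_rec -= lr * d_a

# Toy demonstration: 8-dimensional inputs, 3 stochastic binary code units.
D, H = 8, 3
W_rec = 0.1 * rng.standard_normal((H, D)); b_rec = np.zeros(H)
W_gen = 0.1 * rng.standard_normal((D, H)); b_gen = np.zeros(D)
data = rng.standard_normal((200, D))

for epoch in range(50):
    bound = sum(free_energy(x, W_rec, b_rec, W_gen, b_gen)[0] for x in data) / len(data)
    for x in data:
        grad_step(x, W_rec, b_rec, W_gen, b_gen)
    if epoch % 10 == 0:
        print(f"epoch {epoch:2d}  mean description-length bound: {bound:.3f} nats")
```

Because the objective is an upper bound on the true description length for any setting of the recognition weights, decreasing it with respect to both weight sets is always well defined, which is what allows the bound to serve as a Lyapunov function even when the recognition distribution is a poor approximation to the Boltzmann distribution.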