Greedy Layer-Wise Training of Deep Networks

Complexity theory of circuits strongly suggests that deep architectures can be much more efficient (sometimes exponentially) than shallow architectures in terms of the number of computational elements required to represent certain functions. Deep multi-layer neural networks have many levels of non-linearities, allowing them to compactly represent highly non-linear and highly varying functions. Until recently, however, it was not clear how to train such deep networks, since gradient-based optimization starting from random initialization often appears to get stuck in poor solutions. Hinton et al. [15] recently introduced a greedy layer-wise unsupervised learning algorithm for Deep Belief Networks (DBNs), a generative model with many layers of hidden causal variables. In the context of this optimization problem, we study the algorithm empirically and explore variants to better understand its success and to extend it to cases where the inputs are continuous or where the structure of the input distribution is not revealing enough about the variable to be predicted in a supervised task. Our experiments also confirm the hypothesis that the greedy layer-wise unsupervised training strategy mostly helps the optimization by initializing the weights in a region near a good local minimum, giving rise to internal distributed representations that are high-level abstractions of the input and thereby bringing better generalization.
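
To make the procedure concrete, below is a minimal sketch (not the authors' code) of greedy layer-wise unsupervised pretraining with a stack of binary RBMs trained by one-step contrastive divergence (CD-1). The class and function names (RBM, cd1_update, greedy_pretrain), the layer sizes, and the hyper-parameters are illustrative assumptions; the learned weights would then initialize a deep network that is fine-tuned with supervised gradient descent.

# Minimal sketch of greedy layer-wise RBM pretraining (illustrative, not the paper's code).
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Binary-binary restricted Boltzmann machine trained with CD-1."""

    def __init__(self, n_visible, n_hidden, lr=0.05):
        self.W = 0.01 * rng.standard_normal((n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)   # visible biases
        self.b_h = np.zeros(n_hidden)    # hidden biases
        self.lr = lr

    def hidden_probs(self, v):
        return sigmoid(v @ self.W + self.b_h)

    def visible_probs(self, h):
        return sigmoid(h @ self.W.T + self.b_v)

    def cd1_update(self, v0):
        """One contrastive-divergence step on a mini-batch v0 of shape (batch, n_visible)."""
        ph0 = self.hidden_probs(v0)
        h0 = (rng.random(ph0.shape) < ph0).astype(float)  # sample hidden states
        v1 = self.visible_probs(h0)                        # mean-field reconstruction
        ph1 = self.hidden_probs(v1)
        n = v0.shape[0]
        # Approximate log-likelihood gradient: positive phase minus negative phase.
        self.W += self.lr * (v0.T @ ph0 - v1.T @ ph1) / n
        self.b_v += self.lr * (v0 - v1).mean(axis=0)
        self.b_h += self.lr * (ph0 - ph1).mean(axis=0)

def greedy_pretrain(data, layer_sizes, epochs=10, batch=32):
    """Train one RBM per layer on the representation produced by the previous layer."""
    rbms, rep = [], data
    for n_hidden in layer_sizes:
        rbm = RBM(rep.shape[1], n_hidden)
        for _ in range(epochs):
            perm = rng.permutation(len(rep))
            for i in range(0, len(rep), batch):
                rbm.cd1_update(rep[perm[i:i + batch]])
        rbms.append(rbm)
        # Freeze this layer and feed its hidden activations to the next RBM.
        rep = rbm.hidden_probs(rep)
    return rbms

if __name__ == "__main__":
    X = (rng.random((512, 64)) < 0.3).astype(float)  # toy binary data
    stack = greedy_pretrain(X, layer_sizes=[32, 16])

Handling continuous inputs, as discussed in the paper, would require replacing the binary visible units with a continuous-valued variant (e.g. Gaussian visible units), which this sketch does not implement.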

[1] J. Håstad. Computational limitations of small-depth circuits, 1987.

[2] C. Lebiere et al. The Cascade-Correlation Learning Architecture. NIPS, 1989.

[3] G. E. Hinton et al. The "wake-sleep" algorithm for unsupervised neural networks. Science, 1995.

[4] R. Lengellé et al. Training MLPs layer by layer using an objective function for internal representations. Neural Networks, 1996.

[5] E. Allender et al. Circuit Complexity before the Dawn of the New Millennium. FSTTCS, 1996.

[6] P. Mineiro et al. A Monte Carlo EM Approach for Partially Observable Diffusion Processes: Theory and Applications to Neural Networks. Neural Computation, 2002.

[7] P. E. Utgoff et al. Many-Layered Learning. Neural Computation, 2002.

[8] G. E. Hinton. Training Products of Experts by Minimizing Contrastive Divergence. Neural Computation, 2002.

[9] A. F. Murray et al. Continuous restricted Boltzmann machine with an implementable training algorithm, 2003.

[10] G. E. Hinton et al. Exponential Family Harmoniums with an Application to Information Retrieval. NIPS, 2004.

[11] G. Tesauro et al. Practical issues in temporal difference learning. Machine Learning, 1992.

[12] N. Le Roux et al. Convex Neural Networks. NIPS, 2005.

[13] N. Le Roux et al. The Curse of Highly Variable Functions for Local Kernel Machines. NIPS, 2005.

[14] G. E. Hinton et al. Reducing the Dimensionality of Data with Neural Networks. Science, 2006.

[15] Y. W. Teh et al. A Fast Learning Algorithm for Deep Belief Nets. Neural Computation, 2006.

[16] Y. Bengio et al. Scaling learning algorithms towards AI, 2007.