Deep, Narrow Sigmoid Belief Networks Are Universal Approximators
[1] Geoffrey E. Hinton, et al. Learning representations by back-propagating errors, 1986, Nature.
[2] Geoffrey E. Hinton, et al. Learning internal representations by error propagation, 1986.
[3] D. Rumelhart. Learning internal representations by back-propagating errors, 1986.
[4] Kurt Hornik, et al. Multilayer feedforward networks are universal approximators, 1989, Neural Networks.
[5] Geoffrey E. Hinton. Training Products of Experts by Minimizing Contrastive Divergence, 2002, Neural Computation.
[6] Raúl Rojas. Networks of width one are universal classifiers, 2003, Proceedings of the International Joint Conference on Neural Networks.
[7] Yoshua Bengio, et al. Greedy Layer-Wise Training of Deep Networks, 2006, NIPS.
[8] Yee Whye Teh, et al. A Fast Learning Algorithm for Deep Belief Nets, 2006, Neural Computation.
[9] Jason Weston, et al. Large-scale kernel machines, 2007.
[10] Yoshua Bengio, et al. Scaling learning algorithms towards AI, 2007.
[11] Yoshua Bengio, et al. An empirical evaluation of deep architectures on problems with many factors of variation, 2007, ICML '07.
[12] Geoffrey E. Hinton, et al. Using Deep Belief Nets to Learn Covariance Kernels for Gaussian Processes, 2007, NIPS.
[13] Nicolas Le Roux, et al. Representational Power of Restricted Boltzmann Machines and Deep Belief Networks, 2008, Neural Computation.
[14] Geoffrey E. Hinton. Reducing the Dimensionality of Data with Neural Networks, 2008.
[15] Geoffrey E. Hinton, et al. Semantic hashing, 2009, Int. J. Approx. Reason.
[16] Geoffrey E. Hinton. Learning to represent visual input, 2010, Philosophical Transactions of the Royal Society B: Biological Sciences.