Restricted Boltzmann Machines are Hard to Approximately Evaluate or Simulate
[1] Naoki Abe, et al. On the computational complexity of approximating distributions by probabilistic automata, 1990, Machine Learning.
[2] Yoshua Bengio, et al. An empirical evaluation of deep architectures on problems with many factors of variation, 2007, ICML '07.
[3] Kazuyuki Tanaka, et al. Approximate Learning Algorithm for Restricted Boltzmann Machines, 2008, 2008 International Conference on Computational Intelligence for Modelling Control & Automation.
[4] Dan Roth, et al. On the Hardness of Approximate Reasoning, 1993, IJCAI.
[5] Yoshua Bengio, et al. Why Does Unsupervised Pre-training Help Deep Learning?, 2010, AISTATS.
[6] David Haussler, et al. Unsupervised learning of distributions on binary vectors using two layer networks, 1991, NIPS 1991.
[7] Yoshua Bengio, et al. Learning Deep Architectures for AI, Found. Trends Mach. Learn..
[8] Yoshua Bengio, et al. Justifying and Generalizing Contrastive Divergence, 2009, Neural Computation.
[9] Elchanan Mossel, et al. The Complexity of Distinguishing Markov Random Fields, 2008, APPROX-RANDOM.
[10] Leslie G. Valiant, et al. Random Generation of Combinatorial Structures from a Uniform Distribution, 1986, Theor. Comput. Sci..
[11] Noga Alon, et al. Approximating the cut-norm via Grothendieck's inequality, 2004, STOC '04.
[12] Geoffrey E. Hinton. Training Products of Experts by Minimizing Contrastive Divergence, 2002, Neural Computation.
[13] Paul Smolensky. Information processing in dynamical systems: foundations of harmony theory, 1986.
[14] Ronitt Rubinfeld, et al. On the learnability of discrete distributions, 1994, STOC '94.
[15] Yee Whye Teh, et al. A Fast Learning Algorithm for Deep Belief Nets, 2006, Neural Computation.