The Optimization of Parallel DBN Based on Spark