Greedy Layer-Wise Training of Long Short Term Memory Networks
Tao Mei | Ting Yao | Xinmei Tian | Xu Shen | Kaisheng Xu