Yoshua Bengio | Adam Trischler | Devansh Arpit | Samira Shabanian
[1] Diederik P. Kingma et al. Stochastic Gradient VB and the Variational Auto-Encoder, 2013.
[2] Richard Socher et al. Revisiting Activation Regularization for Language RNNs, 2017, arXiv.
[3] Yoshua Bengio et al. Char2Wav: End-to-End Speech Synthesis, 2017, ICLR.
[4] Yoshua Bengio et al. Empirical Evaluation of Gated Recurrent Neural Networks on Sequence Modeling, 2014, arXiv.
[5] Razvan Pascanu et al. Understanding the exploding gradient problem, 2012, arXiv.
[6] Ole Winther et al. Sequential Neural Models with Stochastic Layers, 2016, NIPS.
[7] Hugo Larochelle et al. MADE: Masked Autoencoder for Distribution Estimation, 2015, ICML.
[8] David Vázquez et al. PixelVAE: A Latent Variable Model for Natural Images, 2016, ICLR.
[9] Samy Bengio et al. Tacotron: Towards End-to-End Speech Synthesis, 2017, INTERSPEECH.
[10] Christopher Joseph Pal et al. Twin Networks: Using the Future as a Regularizer, 2017, arXiv.
[11] Beatrice Santorini et al. Building a Large Annotated Corpus of English: The Penn Treebank, 1993, CL.
[12] Ruslan Salakhutdinov et al. Evaluating probabilities under high-dimensional latent variable models, 2008, NIPS.
[13] Geoffrey E. Hinton et al. Learning representations by back-propagating errors, 1986, Nature.
[14] Jürgen Schmidhuber et al. Long Short-Term Memory, 1997, Neural Computation.
[15] Christian Osendorfer et al. Learning Stochastic Recurrent Networks, 2014, NIPS.
[16] Yoshua Bengio et al. Professor Forcing: A New Algorithm for Training Recurrent Networks, 2016, NIPS.
[17] Wojciech Zaremba et al. Recurrent Neural Network Regularization, 2014, arXiv.
[18] Yoshua Bengio et al. SampleRNN: An Unconditional End-to-End Neural Audio Generation Model, 2016, ICLR.
[19] Hugo Larochelle et al. Neural Autoregressive Distribution Estimation, 2016, J. Mach. Learn. Res.
[20] Yoshua Bengio et al. Training opposing directed models using geometric mean matching, 2015, arXiv.
[21] Yoshua Bengio et al. Zoneout: Regularizing RNNs by Randomly Preserving Hidden Activations, 2016, ICLR.
[22] Jimmy Ba et al. Adam: A Method for Stochastic Optimization, 2014, ICLR.
[23] Tapani Raiko et al. Iterative Neural Autoregressive Distribution Estimator NADE-k, 2014, NIPS.
[24] Samy Bengio et al. Generating Sentences from a Continuous Space, 2015, CoNLL.
[25] Nitish Srivastava et al. Dropout: a simple way to prevent neural networks from overfitting, 2014, J. Mach. Learn. Res.
[26] Zoubin Ghahramani et al. A Theoretically Grounded Application of Dropout in Recurrent Neural Networks, 2015, NIPS.
[27] Alex Graves et al. DRAW: A Recurrent Neural Network For Image Generation, 2015, ICML.
[28] Yoshua Bengio et al. A Recurrent Latent Variable Model for Sequential Data, 2015, NIPS.
[29] Alexander J. Smola et al. Jointly modeling aspects, ratings and sentiments for movie recommendation (JMARS), 2014, KDD.
[30] Adam Coates et al. Deep Voice: Real-time Neural Text-to-Speech, 2017, ICML.
[31] Yoshua Bengio et al. Z-Forcing: Training Stochastic Recurrent Networks, 2017, NIPS.
[32] Max Welling et al. Markov Chain Monte Carlo and Variational Inference: Bridging the Gap, 2014, ICML.
[33] Aaron C. Courville et al. Recurrent Batch Normalization, 2016, ICLR.
[34] Koray Kavukcuoglu et al. Pixel Recurrent Neural Networks, 2016, ICML.