CoT: Cooperative Training for Generative Modeling of Discrete Data
Lantao Yu | Weinan Zhang | Sidi Lu | Yaoming Zhu | Siyuan Feng