[1] Alec Radford, et al. Proximal Policy Optimization Algorithms, 2017, arXiv.
[2] Vaibhava Goel, et al. Self-Critical Sequence Training for Image Captioning, 2017, CVPR.
[3] Richard Socher, et al. A Deep Reinforced Model for Abstractive Summarization, 2017, ICLR.
[4] Lukasz Kaiser, et al. Attention Is All You Need, 2017, NIPS.
[5] Graeme Hirst, et al. Hybrid Models for Lexical Acquisition of Correlated Styles, 2013, IJCNLP.
[6] Graeme Hirst, et al. A Multi-Dimensional Bayesian Approach to Lexical Style, 2013, NAACL.
[7] Marc'Aurelio Ranzato, et al. Sequence Level Training with Recurrent Neural Networks, 2015, ICLR.
[8] Mark Chen, et al. Language Models are Few-Shot Learners, 2020, NeurIPS.
[9] Balaji Vasan Srinivasan, et al. Adapting Language Models for Non-Parallel Author-Stylized Rewriting, 2019, AAAI.
[10] Shibamouli Lahiri, et al. Complexity of Word Collocation Networks: A Preliminary Structural Analysis, 2013, EACL.
[11] Razvan Pascanu, et al. Stabilizing Transformers for Reinforcement Learning, 2019, ICML.
[12] Ronald J. Williams. Simple Statistical Gradient-Following Algorithms for Connectionist Reinforcement Learning, 1992, Machine Learning.
[13] Balaji Vasan Srinivasan, et al. A Lexical, Syntactic, and Semantic Perspective for Understanding Style in Text, 2019, arXiv.
[14] Richard S. Sutton, et al. Reinforcement Learning: An Introduction, 1998, MIT Press.
[15] Yejin Choi, et al. The Curious Case of Neural Text Degeneration, 2019, ICLR.
[16] Udo Hahn, et al. EmoBank: Studying the Impact of Annotation Perspective and Representation Format on Dimensional Emotion Analysis, 2017, EACL.
[17] Zhoujun Li, et al. Harnessing Pre-Trained Neural Networks with Rules for Formality Style Transfer, 2019, EMNLP.
[18] Alec Radford, et al. Fine-Tuning Language Models from Human Preferences, 2019, arXiv.
[19] Alec Radford, et al. Improving Language Understanding by Generative Pre-Training, 2018.
[20] Ilya Sutskever, et al. Language Models are Unsupervised Multitask Learners, 2019.
[21] Hang Li, et al. Paraphrase Generation with Deep Reinforcement Learning, 2017, EMNLP.
[22] Rico Sennrich, et al. Neural Machine Translation of Rare Words with Subword Units, 2015, ACL.