A Systematic Characterization of Sampling Algorithms for Open-ended Language Generation
Moin Nadeem | Tianxing He | Kyunghyun Cho | James Glass