[1] Tom B. Brown, et al. Language Models are Few-Shot Learners, 2020, NeurIPS.
[2] Dongyan Zhao, et al. Draft and Edit: Automatic Storytelling Through Multi-Pass Hierarchical Conditional Variational Autoencoder, 2020, AAAI.
[3] Timothy Baldwin, et al. An Automatic Approach for Document-level Topic Model Evaluation, 2017, CoNLL.
[4] Minlie Huang, et al. Long Text Generation by Modeling Sentence-Level and Discourse-Level Coherence, 2021, ACL.
[5] Pushmeet Kohli, et al. Story Cloze Evaluator: Vector Space Representation Evaluation by Predicting What Happens Next, 2016, RepEval@ACL.
[6] Jacob Devlin, et al. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, 2019, NAACL.
[7] Amal Alabdulkarim, et al. Automatic Story Generation: Challenges and Attempts, 2021, NUSE.
[8] Kishore Papineni, et al. Bleu: a Method for Automatic Evaluation of Machine Translation, 2002, ACL.
[9] Junxian He, et al. Lagging Inference Networks and Posterior Collapse in Variational Autoencoders, 2019, ICLR.
[10] Lili Yao, et al. Plan-And-Write: Towards Better Automatic Storytelling, 2018, AAAI.
[11] Angela Fan, et al. Hierarchical Neural Story Generation, 2018, ACL.
[12] Anima Anandkumar, et al. Controllable Story Generation with External Knowledge Using Large-Scale Language Models, 2020, EMNLP.
[13] Joelle Pineau, et al. Language GANs Falling Short, 2018, ICLR.
[14] Mahdieh Soleymani Baghshah, et al. Jointly Measuring Diversity and Quality in Text Generation Models, 2019, Proceedings of the Workshop on Methods for Optimizing and Evaluating Neural Language Generation.
[15] Pascal Vincent, et al. Do Sequence-to-sequence VAEs Learn Global Features of Sentences?, 2020, EMNLP.
[16] Thomas Wolf, et al. HuggingFace's Transformers: State-of-the-art Natural Language Processing, 2019, arXiv.
[17] Ari Holtzman, et al. The Curious Case of Neural Text Degeneration, 2019, ICLR.
[18] Guillaume Desjardins, et al. Understanding disentangling in β-VAE, 2018, arXiv.
[19] Xiujun Li, et al. Optimus: Organizing Sentences via Pre-trained Modeling of a Latent Space, 2020, EMNLP.
[20] Diederik P. Kingma, et al. An Introduction to Variational Autoencoders, 2019, Found. Trends Mach. Learn.
[21] Minlie Huang, et al. A Knowledge-Enhanced Pretraining Model for Commonsense Story Generation, 2020, TACL.
[22] Alec Radford, et al. Language Models are Unsupervised Multitask Learners, 2019.
[23] David M. Blei, et al. Latent Dirichlet Allocation, 2001, J. Mach. Learn. Res.
[24] Diederik P. Kingma, et al. Adam: A Method for Stochastic Optimization, 2014, ICLR.
[25] Harsh Jhamtani, et al. Narrative Text Generation with a Latent Discrete Plan, 2020, EMNLP.
[26] Michael Röder, et al. Exploring the Space of Topic Coherence Measures, 2015, WSDM.
[27] Yuri Burda, et al. Importance Weighted Autoencoders, 2015, ICLR.
[28] Chin-Yew Lin, et al. ROUGE: A Package for Automatic Evaluation of Summaries, 2004, ACL.
[29] Lei Zheng, et al. Texygen: A Benchmarking Platform for Text Generation Models, 2018, SIGIR.
[30] Percy Liang, et al. Unifying Human and Statistical Evaluation for Natural Language Generation, 2019, NAACL.
[31] Yiming Yang, et al. A Surprisingly Effective Fix for Deep Latent Variable Modeling of Text, 2019, EMNLP.
[32] Minlie Huang, et al. UNION: An Unreferenced Metric for Evaluating Open-ended Story Generation, 2020, EMNLP.