Toward Better Storylines with Sentence-Level Language Models
Daphne Ippolito | David Grangier | Douglas Eck | Chris Callison-Burch
[1] Eric Nyberg,et al. Storyboarding of Recipes: Grounded Contextual Generation , 2019, ACL.
[2] Yann Dauphin,et al. Strategies for Structuring Story Generation , 2019, ACL.
[3] Gongshen Liu,et al. A Character-Centric Neural Model for Automated Story Generation , 2020, AAAI.
[4] Markus Freitag,et al. APE at Scale and Its Implications on MT Evaluation Biases , 2019, WMT.
[5] Xiaoyan Zhu,et al. Story Ending Selection by Finding Hints From Pairwise Candidate Endings , 2019, IEEE/ACM Transactions on Audio, Speech, and Language Processing.
[6] Nenghai Yu,et al. Deliberation Networks: Sequence Generation Beyond One-Pass Decoding , 2017, NIPS.
[7] Yann Dauphin,et al. Hierarchical Neural Story Generation , 2018, ACL.
[8] Yoshua Bengio,et al. Neural Machine Translation by Jointly Learning to Align and Translate , 2014, ICLR.
[9] Christian Chiarcos,et al. Resource-Lean Modeling of Coherence in Commonsense Stories , 2017, LSDSem@EACL.
[10] Nathanael Chambers,et al. A Corpus and Cloze Evaluation for Deeper Understanding of Commonsense Stories , 2016, NAACL.
[11] Yonatan Belinkov,et al. Linguistic Knowledge and Transferability of Contextual Representations , 2019, NAACL.
[12] Minlie Huang,et al. Story Ending Generation with Incremental Encoding and Commonsense Knowledge , 2018, AAAI.
[13] Hermann Ney,et al. LSTM Neural Networks for Language Modeling , 2012, INTERSPEECH.
[14] Wilson L. Taylor,et al. “Cloze Procedure”: A New Tool for Measuring Readability , 1953.
[15] Dan Roth,et al. Story Comprehension for Predicting What Happens Next , 2017, EMNLP.
[16] Mark O. Riedl,et al. Event Representations for Automated Story Generation with Deep Neural Nets , 2017, AAAI.
[17] Christopher D. Manning,et al. Do Massively Pretrained Language Models Make Better Storytellers? , 2019, CoNLL.
[18] Yoshua Bengio,et al. A Neural Probabilistic Language Model , 2003, J. Mach. Learn. Res..
[19] Ming-Wei Chang,et al. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding , 2019, NAACL.
[20] Lukasz Kaiser,et al. Attention is All you Need , 2017, NIPS.
[21] Jeffrey Dean,et al. Distributed Representations of Words and Phrases and their Compositionality , 2013, NIPS.
[22] Ting Liu,et al. Story Ending Prediction by Transferable BERT , 2019, IJCAI.
[23] Yejin Choi,et al. Story Cloze Task: UW NLP System , 2017, LSDSem@EACL.
[24] Mirella Lapata,et al. Probabilistic Text Structuring: Experiments with Sentence Ordering , 2003, ACL.
[25] Claire Cardie,et al. Improving Machine Reading Comprehension with General Reading Strategies , 2018, NAACL.
[26] Kilian Q. Weinberger,et al. BERTScore: Evaluating Text Generation with BERT , 2019, ICLR.
[27] Sanja Fidler,et al. Skip-Thought Vectors , 2015, NIPS.
[28] Dan Roth,et al. A Joint Model for Semantic Sequences: Frames, Entities, Sentiments , 2017, CoNLL.
[29] Jian Sun,et al. Deep Residual Learning for Image Recognition , 2016, CVPR.
[30] Nan Hua,et al. Universal Sentence Encoder , 2018, ArXiv.
[31] Wanxiang Che,et al. Discriminative Sentence Modeling for Story Ending Prediction , 2020, AAAI.
[32] Noah A. Smith,et al. Creative Writing with a Machine in the Loop: Case Studies on Slogans and Stories , 2018, IUI.
[33] Ilya Sutskever,et al. Language Models are Unsupervised Multitask Learners , 2019.
[34] Honglak Lee,et al. An efficient framework for learning sentence representations , 2018, ICLR.
[35] Dongyan Zhao,et al. Plan-And-Write: Towards Better Automatic Storytelling , 2018, AAAI.