Multi-lingual Wikipedia Summarization and Title Generation On Low Resource Corpus
Wei Liu, Yinan Liu, Lei Li, Zuying Huang