Hiroyuki Shindo | Ikuya Yamada | Hideaki Takeda | Akari Asai | Yuji Matsumoto
[1] Yiming Yang, et al. XLNet: Generalized Autoregressive Pretraining for Language Understanding, 2019, NeurIPS.
[2] Roland Vollgraf, et al. Contextual String Embeddings for Sequence Labeling, 2018, COLING.
[3] Ming-Wei Chang, et al. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, 2019, NAACL.
[4] Simon Ostermann, et al. Commonsense Inference in Natural Language Processing (COIN) - Shared Task Report, 2019, EMNLP.
[5] Hiroyuki Shindo, et al. Joint Learning of the Embedding of Words and Entities for Named Entity Disambiguation, 2016, CoNLL.
[6] Omer Levy, et al. SpanBERT: Improving Pre-training by Representing and Predicting Spans, 2019, TACL.
[7] Guillaume Bouchard, et al. Complex Embeddings for Simple Link Prediction, 2016, ICML.
[8] Wenhan Xiong, et al. Pretrained Encyclopedia: Weakly Supervised Knowledge-Pretrained Language Model, 2019, ICLR.
[9] Xu Chen, et al. Bridge Text and Knowledge by Learning Multi-Prototype Entity Mention Embedding, 2017, ACL.
[10] Guillaume Lample, et al. Neural Architectures for Named Entity Recognition, 2016, NAACL.
[11] Christopher Clark, et al. Simple and Effective Multi-Paragraph Reading Comprehension, 2017, ACL.
[12] Ali Farhadi, et al. Bidirectional Attention Flow for Machine Comprehension, 2016, ICLR.
[13] Hiroyuki Shindo, et al. Learning Distributed Representations of Texts and Entities from Knowledge Base, 2017, TACL.
[14] Lukasz Kaiser, et al. Attention Is All You Need, 2017, NIPS.
[15] Makoto Miwa, et al. Deep Exhaustive Model for Nested Named Entity Recognition, 2018, EMNLP.
[16] Christopher D. Manning, et al. Graph Convolution over Pruned Dependency Trees Improves Relation Extraction, 2018, EMNLP.
[17] Omer Levy, et al. Ultra-Fine Entity Typing, 2018, ACL.
[18] Maosong Sun, et al. ERNIE: Enhanced Language Representation with Informative Entities, 2019, ACL.
[19] Tianyu Gao, et al. KEPLER: A Unified Model for Knowledge Embedding and Pre-trained Language Representation, 2019, ArXiv.
[20] Luke S. Zettlemoyer, et al. Cloze-driven Pretraining of Self-attention Networks, 2019, EMNLP.
[21] Omer Levy, et al. What Does BERT Look at? An Analysis of BERT's Attention, 2019, BlackboxNLP@ACL.
[22] Jian Zhang, et al. SQuAD: 100,000+ Questions for Machine Comprehension of Text, 2016, EMNLP.
[23] Colin Raffel, et al. Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer, 2019, J. Mach. Learn. Res.
[24] Kevin Gimpel, et al. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations, 2019, ICLR.
[25] Kevin Gimpel, et al. Gaussian Error Linear Units (GELUs), 2016.
[26] Jeffrey Ling, et al. Learning Cross-Context Entity Representations from Text, 2020, ArXiv.
[27] Guotong Xie, et al. Pingan Smart Health and SJTU at COIN - Shared Task: Utilizing Pre-trained Language Models and Common-sense Knowledge in Machine Reading Tasks, 2019, EMNLP.
[28] Thomas Hofmann, et al. Deep Joint Entity Disambiguation with Local Neural Attention, 2017, EMNLP.
[29] Omer Levy, et al. BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension, 2019, ACL.
[30] Roy Schwartz, et al. Knowledge Enhanced Contextual Word Representations, 2019, EMNLP/IJCNLP.
[31] Omer Levy, et al. SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems, 2019, NeurIPS.
[32] Erik F. Tjong Kim Sang, et al. Introduction to the CoNLL-2003 Shared Task: Language-Independent Named Entity Recognition, 2003, CoNLL.
[33] Danqi Chen, et al. Position-aware Attention and Supervised Data Improve Slot Filling, 2017, EMNLP.
[34] Omer Levy, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach, 2019, ArXiv.
[35] Luke S. Zettlemoyer, et al. Deep Contextualized Word Representations, 2018, NAACL.
[36] Jeffrey Ling, et al. Matching the Blanks: Distributional Similarity for Relation Learning, 2019, ACL.
[37] Geoffrey E. Hinton, et al. Layer Normalization, 2016, ArXiv.
[38] Martin Wattenberg, et al. Visualizing and Measuring the Geometry of BERT, 2019, NeurIPS.
[39] Ruize Wang, et al. K-Adapter: Infusing Knowledge into Pre-Trained Models with Adapters, 2020, ArXiv.
[40] Jason Weston, et al. Translating Embeddings for Modeling Multi-relational Data, 2013, NIPS.
[41] Jianfeng Gao, et al. Embedding Entities and Relations for Learning and Inference in Knowledge Bases, 2014, ICLR.
[42] Xiaodong Liu, et al. ReCoRD: Bridging the Gap between Human and Machine Commonsense Reading Comprehension, 2018, ArXiv.