[1] Danqi Chen, et al. Dense Passage Retrieval for Open-Domain Question Answering, 2020, EMNLP.
[2] Luke Zettlemoyer, et al. Zero-shot Entity Linking with Dense Entity Retrieval, 2019, arXiv.
[3] Christopher D. Manning, et al. Leveraging Linguistic Structure for Open Domain Information Extraction, 2015, ACL.
[4] Omer Levy, et al. BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension, 2019, ACL.
[5] Tomohide Shibata. Understand in 5 Minutes!? Skimming Famous Papers: Jacob Devlin et al.: BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, 2020.
[6] Dawn Song, et al. Language Models are Open Knowledge Graphs, 2020, arXiv.
[7] Yury A. Malkov, et al. Efficient and Robust Approximate Nearest Neighbor Search Using Hierarchical Navigable Small World Graphs, 2016, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[8] Colin Raffel, et al. How Much Knowledge Can You Pack into the Parameters of a Language Model?, 2020, EMNLP.
[9] Ming-Wei Chang, et al. Natural Questions: A Benchmark for Question Answering Research, 2019, TACL.
[10] Jeff Johnson, et al. Billion-Scale Similarity Search with GPUs, 2017, IEEE Transactions on Big Data.
[11] Christophe Gravier, et al. T-REx: A Large Scale Alignment of Natural Language with Knowledge Base Triples, 2018, LREC.
[12] Fabio Petroni, et al. How Context Affects Language Models' Factual Predictions, 2020, AKBC.
[13] Nicola De Cao, et al. KILT: a Benchmark for Knowledge Intensive Language Tasks, 2020, NAACL.
[14] Nicola De Cao, et al. Autoregressive Entity Retrieval, 2020, arXiv.
[15] Fabio Petroni, et al. Multi-Task Retrieval for Knowledge-Intensive Tasks, 2021, ACL.
[16] Omer Levy, et al. Zero-Shot Relation Extraction via Reading Comprehension, 2017, CoNLL.
[17] Omer Levy, et al. SpanBERT: Improving Pre-training by Representing and Predicting Spans, 2019, TACL.
[18] Sebastian Riedel, et al. Language Models as Knowledge Bases?, 2019, EMNLP.
[19] Ming-Wei Chang, et al. REALM: Retrieval-Augmented Language Model Pre-Training, 2020, ICML.
[20] Zhiyi Song, et al. Overview of Linguistic Resources for the TAC KBP 2017 Evaluations: Methodologies and Results, 2017, TAC.
[21] Alfio Gliozzo, et al. Inducing Implicit Relations from Text Using Distantly Supervised Deep Nets, 2018, International Semantic Web Conference.
[22] Fabio Petroni, et al. Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks, 2020, NeurIPS.
[23] Ming-Wei Chang, et al. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, 2019, NAACL.
[24] Danqi Chen, et al. Learning Dense Representations of Phrases at Scale, 2020, ACL.
[25] Danqi Chen, et al. Position-aware Attention and Supervised Data Improve Slot Filling, 2017, EMNLP.