Sebastian Riedel | Fabrizio Silvestri | Majid Yazdani | Marzieh Saeidi | James Thorne | Alon Halevy
[1] Alex Wang, et al. What do you learn from context? Probing for sentence structure in contextualized word representations, 2019, ICLR.
[2] Mark Chen, et al. Language Models are Few-Shot Learners, 2020, NeurIPS.
[3] Dan Klein, et al. Neural Module Networks, 2016, CVPR.
[4] Sihem Amer-Yahia, et al. Report on the DB/IR panel at SIGMOD 2005, 2005, SIGMOD Record.
[5] Luke S. Zettlemoyer, et al. Dissecting Contextual Word Embeddings: Architecture and Representation, 2018, EMNLP.
[6] Andreas Vlachos, et al. FEVER: a Large-scale Dataset for Fact Extraction and VERification, 2018, NAACL.
[7] Carlos Guestrin, et al. "Why Should I Trust You?": Explaining the Predictions of Any Classifier, 2016, ArXiv.
[8] Percy Liang, et al. Compositional Semantic Parsing on Semi-Structured Tables, 2015, ACL.
[9] Kenton Lee, et al. Giving BERT a Calculator: Finding Operations and Arguments with Reading Comprehension, 2019, EMNLP.
[10] Sebastian Riedel, et al. Constructing Datasets for Multi-hop Reading Comprehension Across Documents, 2017, TACL.
[11] Jason Weston, et al. Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks, 2015, ICLR.
[12] Nicola De Cao, et al. KILT: a Benchmark for Knowledge Intensive Language Tasks, 2020, ArXiv.
[13] Steven C. H. Hoi, et al. Photon: A Robust Cross-Domain Text-to-SQL System, 2020, ACL.
[14] Peter Dayan, et al. Q-learning, 1992, Machine Learning.
[15] AnHai Doan, et al. Deep Entity Matching with Pre-Trained Language Models, 2020, Proc. VLDB Endow.
[16] Margaret Mitchell, et al. VQA: Visual Question Answering, 2015, International Journal of Computer Vision.
[17] Furu Wei, et al. Visualizing and Understanding the Effectiveness of BERT, 2019, EMNLP.
[18] Fabio Petroni, et al. Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks, 2020, NeurIPS.
[19] Peter Thanisch, et al. Natural language interfaces to databases – an introduction, 1995, Natural Language Engineering.
[20] Iain Murray, et al. BERT and PALs: Projected Attention Layers for Efficient Adaptation in Multi-Task Learning, 2019, ICML.
[21] Omer Levy, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach, 2019, ArXiv.
[22] Percy Liang, et al. Know What You Don’t Know: Unanswerable Questions for SQuAD, 2018, ACL.
[23] Oren Etzioni, et al. Crossing the Structure Chasm, 2003, CIDR.
[24] Il-Yeol Song, et al. ODYS: an approach to building a massively-parallel search engine using a DB-IR tightly-integrated parallel DBMS for higher-level functionality, 2013, SIGMOD.
[25] Ming-Wei Chang, et al. REALM: Retrieval-Augmented Language Model Pre-Training, 2020, ICML.
[26] Mathijs Mul, et al. Compositionality Decomposed: How do Neural Networks Generalise?, 2019, J. Artif. Intell. Res.
[27] Edward Grefenstette, et al. Differentiable Reasoning on Large Knowledge Bases and Natural Language, 2019, Knowledge Graphs for eXplainable Artificial Intelligence.
[28] Tim Kraska, et al. The Case for Learned Index Structures, 2018, SIGMOD.
[29] Jian Zhang, et al. SQuAD: 100,000+ Questions for Machine Comprehension of Text, 2016, EMNLP.
[30] Jeff Johnson, et al. Billion-Scale Similarity Search with GPUs, 2017, IEEE Transactions on Big Data.
[31] Colin Raffel, et al. Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer, 2019, J. Mach. Learn. Res.
[32] Hannaneh Hajishirzi, et al. Multi-hop Reading Comprehension through Question Decomposition and Rescoring, 2019, ACL.
[33] Danqi Chen, et al. Dense Passage Retrieval for Open-Domain Question Answering, 2020, EMNLP.
[34] Gerhard Weikum. DB&IR: both sides now, 2007, SIGMOD.
[35] Daniel Jurafsky, et al. Understanding Neural Networks through Representation Erasure, 2016, ArXiv.
[36] Peter Clark, et al. Transformers as Soft Reasoners over Language, 2020, ArXiv.
[37] Jason Weston, et al. End-To-End Memory Networks, 2015, NIPS.
[38] Daniel Deutch, et al. Break It Down: A Question Understanding Benchmark, 2020, TACL.
[39] Tim Rocktäschel, et al. End-to-end Differentiable Proving, 2017, NIPS.
[40] Lukasz Kaiser, et al. Attention is All you Need, 2017, NIPS.
[41] Markus Krötzsch, et al. Wikidata: A Free Collaborative Knowledgebase, 2014, Commun. ACM.
[42] Jae-Gil Lee, et al. DB-IR integration using tight-coupling in the Odysseus DBMS, 2013, World Wide Web.
[43] Rémi Louf, et al. HuggingFace's Transformers: State-of-the-art Natural Language Processing, 2019, ArXiv.
[44] Jonathan Berant, et al. The Web as a Knowledge-Base for Answering Complex Questions, 2018, NAACL.
[45] Ming-Wei Chang, et al. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, 2019, NAACL.
[46] Andrew Chou, et al. Semantic Parsing on Freebase from Question-Answer Pairs, 2013, EMNLP.
[47] Richard Socher, et al. Learning to Retrieve Reasoning Paths over Wikipedia Graph for Question Answering, 2019, ICLR.
[48] Alec Radford, et al. Improving Language Understanding by Generative Pre-Training, 2018.
[49] Matthew Richardson, et al. MCTest: A Challenge Dataset for the Open-Domain Machine Comprehension of Text, 2013, EMNLP.
[50] Norbert Fuhr, et al. Models for Integrated Information Retrieval and Database Systems, 1996.
[51] Guillaume Bouchard, et al. Interpretation of Natural Language Rules in Conversational Machine Reading, 2018, EMNLP.
[52] Sebastian Riedel, et al. Language Models as Knowledge Bases?, 2019, EMNLP.
[53] Edouard Grave, et al. Leveraging Passage Retrieval with Generative Models for Open Domain Question Answering, 2020, EACL.
[54] Gabriel Stanovsky, et al. DROP: A Reading Comprehension Benchmark Requiring Discrete Reasoning Over Paragraphs, 2019, NAACL.
[55] Han Fang, et al. Linformer: Self-Attention with Linear Complexity, 2020, ArXiv.
[56] Fei Li, et al. Constructing an Interactive Natural Language Interface for Relational Databases, 2014, Proc. VLDB Endow.
[57] Theodoros Rekatsinas, et al. Deep Learning for Entity Matching: A Design Space Exploration, 2018, SIGMOD.