Jiyeon Han | Cheongwoong Kang | Sunjae Kwon | Jaesik Choi
[1] Percy Liang, et al. Know What You Don’t Know: Unanswerable Questions for SQuAD, 2018, ACL.
[2] Omer Levy, et al. What Does BERT Look at? An Analysis of BERT’s Attention, 2019, BlackboxNLP@ACL.
[3] Ming-Wei Chang, et al. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, 2019, NAACL.
[4] Alex Wang, et al. What do you learn from context? Probing for sentence structure in contextualized word representations, 2019, ICLR.
[5] Terry Winograd, et al. Understanding natural language, 1974.
[6] George Kurian, et al. Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation, 2016, ArXiv.
[7] Erik T. Mueller, et al. Open Mind Common Sense: Knowledge Acquisition from the General Public, 2002, OTM.
[8] Alec Radford, et al. Improving Language Understanding by Generative Pre-Training, 2018.
[9] Sanja Fidler, et al. Aligning Books and Movies: Towards Story-Like Visual Explanations by Watching Movies and Reading Books, 2015, IEEE International Conference on Computer Vision (ICCV).
[10] Christopher D. Manning, et al. Introduction to Information Retrieval, 2010, J. Assoc. Inf. Sci. Technol.
[11] Jason Weston, et al. Translating Embeddings for Modeling Multi-relational Data, 2013, NIPS.
[12] Iryna Gurevych, et al. LINSPECTOR: Multilingual Probing Tasks for Word Representations, 2019, Computational Linguistics.
[13] Todor Mihaylov, et al. Knowledgeable Reader: Enhancing Cloze-Style Reading Comprehension with External Commonsense Knowledge, 2018, ACL.
[14] Lukasz Kaiser, et al. Attention Is All You Need, 2017, NIPS.
[15] Jian Zhang, et al. SQuAD: 100,000+ Questions for Machine Comprehension of Text, 2016, EMNLP.
[16] Wei Zhao, et al. Yuanfudao at SemEval-2018 Task 11: Three-way Attention and Relational Knowledge for Commonsense Machine Comprehension, 2018, SemEval@NAACL-HLT.
[17] Matthew Richardson, et al. MCTest: A Challenge Dataset for the Open-Domain Machine Comprehension of Text, 2013, EMNLP.
[18] Carol A. Chapelle, et al. Cloze method: what difference does it make?, 1990.
[19] Chris Dyer, et al. Dynamic Integration of Background Knowledge in Neural NLU Systems, 2017, ArXiv (1706.02596).
[20] Yoshua Bengio, et al. Neural Machine Translation by Jointly Learning to Align and Translate, 2014, ICLR.
[21] Alexander M. Rush, et al. Commonsense Knowledge Mining from Pretrained Models, 2019, EMNLP.
[22] Catherine Havasi, et al. ConceptNet 5.5: An Open Multilingual Graph of General Knowledge, 2016, AAAI.
[23] Jonathan Berant, et al. CommonsenseQA: A Question Answering Challenge Targeting Commonsense Knowledge, 2019, NAACL.