A Flexible and Extensible Framework for Multiple Answer Modes Question Answering
Yu Tsao | Keh-Yih Su | Hsin-Min Wang | Kuang-Yu Chang | Chia-Chih Kuo | Shang-Bao Luo | Meng-Tse Wu | Kuan-Yu Chen | Cheng-Chung Fan | Pei-Jun Liao | Chiao-Wei Hsu | Shih-Hong Tsai | Tzu-Man Wu | Aleksandra Smolka | Chao-Chun Liang
[1] Zhen Huang, et al. A Multi-Type Multi-Span Network for Reading Comprehension that Requires Discrete Reasoning, 2019, EMNLP.
[2] R. Thomas McCoy, et al. Right for the Wrong Reasons: Diagnosing Syntactic Heuristics in Natural Language Inference, 2019, ACL.
[3] Jian Zhang, et al. SQuAD: 100,000+ Questions for Machine Comprehension of Text, 2016, EMNLP.
[4] Jia-Fei Hong, et al. Chinese WordNet: Design, Implementation, and Application of an Infrastructure for Cross-Lingual Knowledge Processing [In Chinese], 2010.
[5] D. Gentner, et al. Structure Mapping in Analogy and Similarity, 1997.
[6] Mohit Bansal, et al. Revealing the Importance of Semantic Retrieval for Machine Reading at Scale, 2019, EMNLP.
[7] Yuting Lai, et al. DRCD: A Chinese Machine Reading Comprehension Dataset, 2018, ArXiv.
[8] Marie-Catherine de Marneffe, et al. Evaluating BERT for Natural Language Inference: A Case Study on the CommitmentBank, 2019, EMNLP.
[9] Hai Zhao, et al. Retrospective Reader for Machine Reading Comprehension, 2020, AAAI.
[10] Yiming Yang, et al. XLNet: Generalized Autoregressive Pretraining for Language Understanding, 2019, NeurIPS.
[11] Kenton Lee, et al. Giving BERT a Calculator: Finding Operations and Arguments with Reading Comprehension, 2019, EMNLP.
[12] Zhoujun Li, et al. Ensemble Neural Relation Extraction with Adaptive Boosting, 2018, IJCAI.
[13] Philip Bachman, et al. NewsQA: A Machine Comprehension Dataset, 2016, Rep4NLP@ACL.
[14] Omer Levy, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach, 2019, ArXiv.
[15] David A. Ferrucci, et al. Introduction to "This is Watson", 2012, IBM J. Res. Dev..
[16] Carolyn Penstein Rosé, et al. Stress Test Evaluation for Natural Language Inference, 2018, COLING.
[17] Yang Liu, et al. Fine-tune BERT for Extractive Summarization, 2019, ArXiv.
[18] Catherine Havasi, et al. ConceptNet 5.5: An Open Multilingual Graph of General Knowledge, 2016, AAAI.
[19] Rachel Rudinger, et al. Hypothesis Only Baselines in Natural Language Inference, 2018, *SEMEVAL.
[20] Hsin-Hsi Chen, et al. Predicting the Semantic Orientation of Terms in E-HowNet [In Chinese], 2011, ROCLING/IJCLCLP.