This document gives a knowledge-oriented analysis of about 20 interesting Recognizing Textual Entailment (RTE) examples, drawn from the RTE5 (2009) competition test set. The analysis sets aside shallow statistical matching techniques between T and H, and instead asks: What would it take to reasonably infer that T implies H? What world knowledge would be needed for this task? Although such knowledge-intensive techniques have not had much success in RTE evaluations, ultimately an intelligent system should be expected to know and deploy the world knowledge required to perform this kind of reasoning.
The selected examples are typically ones that our RTE system (called BLUE) got wrong and that require world knowledge to answer. In particular, the analysis covers cases where there was near-perfect lexical overlap between T and H, yet the entailment was NO, i.e., examples that most current RTE systems will likely have gotten wrong. A nice example is #341 (page 26), which requires inferring from "a river floods" that "a river overflows its banks". Seems it should be easy, right? Enjoy!
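To make concrete why the shallow techniques set aside here fall short, the sketch below implements a naive word-overlap baseline (a hypothetical illustration, not the BLUE system; the tokenization, threshold, and example strings are invented assumptions, and the strings merely paraphrase the spirit of pair #341):

```python
# Illustrative sketch only: a naive lexical-overlap "entailment" baseline of the
# kind this analysis deliberately ignores. Threshold and examples are assumptions.

def lexical_overlap(text: str, hypothesis: str) -> float:
    """Fraction of hypothesis words that also appear in the text."""
    t_words = set(text.lower().split())
    h_words = set(hypothesis.lower().split())
    return len(h_words & t_words) / len(h_words)

def shallow_entails(text: str, hypothesis: str, threshold: float = 0.7) -> bool:
    """Answer YES iff enough hypothesis words are covered by the text."""
    return lexical_overlap(text, hypothesis) >= threshold

# A pair in the spirit of #341 (paraphrased, not the actual competition text):
T = "The river flooded during the storm."
H = "The river overflowed its banks during the storm."

# The baseline sees only partial word overlap and answers NO; inferring YES
# requires the world knowledge that a river that floods overflows its banks.
print(lexical_overlap(T, H))   # ~0.57
print(shallow_entails(T, H))   # False
```

Word counting alone cannot bridge "floods" and "overflows its banks", and conversely it happily says YES on high-overlap pairs whose gold answer is NO, which is exactly the gap the knowledge-oriented analysis examines.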