Mining Causality from Texts for Question Answering System

This research aims to automatically mine causality knowledge from texts to support an automatic question answering (QA) system in answering 'why' questions, which are among the most crucial forms of questions. The outcome of this research will assist people in diagnosing problems, for example in plant diseases, health, and industry. While previous work has extracted causality knowledge from only one or two adjacent EDUs (Elementary Discourse Units), this research focuses on mining causality knowledge expressed across multiple EDUs, taking multiple causes and multiple effects into consideration, where adjacency between cause and effect is not required. There are two main problems: how to identify the interesting causality events in documents, and how to identify the boundaries of the causative unit and the effective unit in terms of multiple EDUs. Boundary identification in turn involves at least three sub-problems: the implicit boundary delimiter, the non-adjacent cause-consequence, and the effect surrounded by causes. This research proposes using verb-pair rules, learnt by comparing a Naive Bayes classifier (NB) and a Support Vector Machine (SVM), to identify causality EDUs in Thai agricultural and health news domains. The boundary identification problems are solved by utilizing the verb-pair rules, Centering Theory, and a cue phrase set. The reason for emphasizing verbs in causality extraction is that they explicitly express, to a certain extent, the consequent events of cause and effect, e.g. 'Aphids suck the sap from rice leaves. Then the leaves will shrink. Later, they will become yellow and dry.' The results of the proposed methodology show that the verb-pair rules extracted by NB outperform those extracted by SVM when the corpus contains a high occurrence of each verb, while the results from SVM are better than those from NB when the corpus contains fewer occurrences of each verb. For causality extraction, the verb-pair rules extracted by NB achieve the highest precision (0.88) with a recall of 0.75 on the plant disease corpus, whereas those extracted by SVM achieve the highest precision (0.89) with a recall of 0.76 on bird flu news. For boundary determination, our methodology performs very well, with approximately 96% accuracy. In addition, the extracted causality results can be generalized as laws in the Inductive-Statistical account of Hempel's theory of explanation, which will be useful for QA and reasoning.
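
To make the classification step concrete, the minimal sketch below (not the authors' code) illustrates how verb-pair features taken from candidate cause-effect EDU pairs could be fed to a Naive Bayes classifier and a linear SVM for causal/non-causal labelling. The use of scikit-learn, the toy verb pairs, the feature names, and the labels are all illustrative assumptions, not details from the paper.

```python
# Minimal sketch (assumed setup, not the paper's implementation):
# classify candidate (cause-verb, effect-verb) pairs as causal or not.
from sklearn.feature_extraction import DictVectorizer
from sklearn.naive_bayes import BernoulliNB
from sklearn.svm import LinearSVC

# Toy training data: verb pairs drawn from candidate cause/effect EDU pairs.
train = [
    {"v_cause": "suck",   "v_effect": "shrink"},   # aphids suck sap -> leaves shrink
    {"v_cause": "infect", "v_effect": "wilt"},
    {"v_cause": "plant",  "v_effect": "harvest"},  # temporal sequence, not causal
    {"v_cause": "walk",   "v_effect": "talk"},
]
y = [1, 1, 0, 0]  # 1 = causal verb pair, 0 = non-causal

vec = DictVectorizer()            # one-hot encode the verb-pair features
X = vec.fit_transform(train)

nb = BernoulliNB().fit(X, y)      # Naive Bayes learner
svm = LinearSVC().fit(X, y)       # linear SVM learner

# Classify an unseen candidate pair with both models.
test = vec.transform([{"v_cause": "suck", "v_effect": "dry"}])
print("NB :", nb.predict(test))
print("SVM:", svm.predict(test))
```

In this sketch the verb pairs whose predicted label is causal would play the role of the learnt verb-pair rules, which are then reused during boundary determination.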
