Inverting Grice's Maxims to Learn Rules from Natural Language Extractions

We consider the problem of learning rules from natural language text sources. These sources, such as news articles and web texts, are written to communicate information to a reader with whom the writer shares substantial domain knowledge. Consequently, the texts tend to be concise, mentioning only the minimum information necessary for the reader to draw the correct conclusions. We study the problem of learning domain knowledge from such concise texts, which is an instance of the general problem of learning in the presence of missing data. However, unlike in standard approaches to missing data, in this setting we know that facts are more likely to be omitted from the text when the reader can infer them from the facts that are mentioned combined with the domain knowledge. Hence, we can explicitly model this "missingness" process and invert it via probabilistic inference to learn the underlying domain knowledge. This paper introduces a mention model that captures the probability of facts being mentioned in the text as a function of which other facts have already been mentioned and of domain knowledge in the form of Horn clause rules. Learning must simultaneously search the space of rules and estimate the parameters of the mention model. We accomplish this via an application of Expectation Maximization within a Markov Logic framework. An experimental evaluation on synthetic and natural text data shows that the method can learn accurate rules and apply them to new texts to make correct inferences. Experiments also show that the method outperforms the standard EM approach that assumes mentions are missing at random.
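
To make the mention model concrete, the sketch below gives one plausible reading of the generative story; the function names, the (body, head) rule encoding, and the probabilities p_novel and p_inferable are illustrative assumptions, not the paper's actual formulation. The idea is Grice's maxim of quantity run forward: a fact the reader could already infer from the mentioned facts plus the Horn clause rules is stated with low probability, so an unmentioned-but-true fact becomes evidence that some rule entails it.

    import random

    def forward_chain(facts, rules):
        # Close a set of facts under Horn clause rules, each given as a
        # (body, head) pair where body is a frozenset of ground facts.
        closed = set(facts)
        changed = True
        while changed:
            changed = False
            for body, head in rules:
                if body <= closed and head not in closed:
                    closed.add(head)
                    changed = True
        return closed

    def sample_mentions(true_facts, rules, p_novel=0.9, p_inferable=0.1,
                        rng=random):
        # Generative sketch of a Gricean writer: walk the true facts in
        # order and mention each one with high probability if it is novel,
        # low probability if the reader could already infer it from what
        # has been mentioned so far.
        mentioned = set()
        for f in true_facts:
            inferable = f in forward_chain(mentioned, rules)
            p = p_inferable if inferable else p_novel
            if rng.random() < p:
                mentioned.add(f)
        return mentioned

    # Example: one rule "employed(ann) -> adult(ann)". If employed(ann)
    # is mentioned first, adult(ann) will usually be left implicit.
    rules = [(frozenset({"employed(ann)"}), "adult(ann)")]
    print(sample_mentions(["employed(ann)", "adult(ann)"], rules))

Learning then inverts this process: an EM loop treats unmentioned true facts as hidden variables, imputes them in the E-step via probabilistic inference under the current rules, and re-estimates the mention probabilities and rule set in the M-step, which the paper realizes within a Markov Logic framework.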
