Textual resource acquisition and engineering

A key requirement for high-performing question-answering (QA) systems is access to high-quality reference corpora from which answers to questions can be hypothesized and evaluated. However, the topic of source acquisition and engineering has received little attention to date, because most existing systems were developed under organized evaluation efforts that include reference corpora as part of the task specification. The task of answering Jeopardy!™ questions, on the other hand, does not come with such a well-circumscribed set of relevant resources. Therefore, it became part of the IBM Watson™ effort to develop well-defined procedures for acquiring high-quality resources that can effectively support a high-performing QA system. To this end, we developed three procedures: source acquisition, source transformation, and source expansion. Source acquisition is an iterative development process of acquiring new collections to cover salient topics identified, through principled error analysis, as gaps in existing resources. Source transformation refers to the process by which information is extracted from existing sources, either as a whole or in part, and represented in a form that the system can most easily use. Finally, source expansion attempts to increase coverage of each known topic by adding new information, as well as lexical and syntactic variations of existing information, extracted from large external collections. In this paper, we discuss the methodology developed for IBM Watson to perform acquisition, transformation, and expansion of textual resources, and we demonstrate the effectiveness of each technique through its impact on candidate recall and on end-to-end QA performance.
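To make the source-expansion step more concrete, the following Python sketch shows one simple way such a procedure could be organized; it is an illustration under assumed simplifications, not the Watson implementation. It assumes candidate passages have already been retrieved from an external collection by some retrieval component (not shown), and it uses a plain bag-of-words cosine score where the actual system would use richer relevance features. The function names, threshold, and example texts are all hypothetical.

```python
import math
import re
from collections import Counter


def tokenize(text):
    """Lowercase word tokens; a stand-in for real linguistic preprocessing."""
    return re.findall(r"[a-z0-9]+", text.lower())


def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words term-frequency vectors."""
    overlap = sum(a[t] * b[t] for t in a if t in b)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
        math.sqrt(sum(v * v for v in b.values()))
    return overlap / norm if norm else 0.0


def expand_topic(seed_text, candidate_passages, top_k=5, min_score=0.1):
    """Score externally retrieved passages against the seed document and keep
    the most related ones as an expanded pseudo-document for the topic."""
    seed_vec = Counter(tokenize(seed_text))
    scored = []
    for passage in candidate_passages:
        score = cosine(seed_vec, Counter(tokenize(passage)))
        if score >= min_score:
            scored.append((score, passage))
    scored.sort(reverse=True)
    # The expanded document is the seed plus the top-scoring passages, which
    # adds new facts and paraphrases of facts already present in the seed.
    return seed_text + "\n" + "\n".join(p for _, p in scored[:top_k])


if __name__ == "__main__":
    seed = "Jeopardy! is an American television quiz show created by Merv Griffin."
    candidates = [
        "Merv Griffin created the quiz show Jeopardy!, which first aired in 1964.",
        "The capital of France is Paris.",
        "Contestants on the quiz show respond to clues phrased as answers.",
    ]
    print(expand_topic(seed, candidates, top_k=2))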
