Special Questions and techniques

Jeopardy!™ questions represent a wide variety of question types. The vast majority are Standard Jeopardy! Questions, in which the clue contains one or more assertions about some unnamed entity or concept, and the task is to identify the described entity or concept. This style of question is representative of a wide range of common question-answering tasks, and the bulk of the IBM Watson™ system is focused on solving it. A small percentage of Jeopardy! questions, however, require a specialized procedure to derive an answer or some assertion about the answer. We call any question that requires such a specialized computational procedure, selected on the basis of a unique classification of the question, a Special Jeopardy! Question. Although Special Questions per se are typically less relevant to broader question-answering applications, they are an important class of question to address in the Jeopardy! context. Moreover, the design of our Special Question solving procedures motivated architectural decisions that are applicable to general open-domain question-answering systems. Here, we explore these rarer classes of questions and describe and evaluate the techniques we developed to solve them.
