A method for evaluating answers by comparing semantic information in a question and answer interaction

In this paper we describe a method for evaluating the validity of answers to questions about the surface content of a story in English, so that a language learning tool can accurately gauge the learner's state of comprehension. Since our interactive question-and-answer tool accepts free-form answers written in natural language, the validity of an answer must be judged by comparing the semantic information in the answer text with that present in the story. In addition to identifying discrepancies between these sets of semantic information and checking the appropriateness of insertions, omissions, and substitutions in the response, our evaluation method attempts to resolve ambiguities in the natural language processing results. We also present the results of an investigation into the range of answers to which our answer judgment method is applicable, as a means of evaluating the method. © 2007 Wiley Periodicals, Inc. Syst Comp Jpn, 38(7): 84–97, 2007; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/scj.20432
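The comparison the abstract describes can be illustrated with a minimal sketch. Here semantic information is assumed to take the form of (predicate, role, filler) triples; the function `diff_semantics`, its triple representation, and the matching heuristic are all hypothetical illustrations, not the paper's actual algorithm, which is described in the body of the article.

```python
def diff_semantics(story_facts, answer_facts):
    """Classify discrepancies between two sets of (predicate, role, filler)
    triples: content shared by story and answer, omissions (in the story but
    absent from the answer), insertions (in the answer but not the story),
    and substitutions (same predicate and role, different filler)."""
    story = set(story_facts)
    answer = set(answer_facts)
    matched = story & answer
    omissions = story - answer
    insertions = answer - story
    substitutions = []
    # Pair up an omission and an insertion that share predicate and role:
    # these are treated as a single substitution rather than two errors.
    for s_triple in sorted(omissions.copy()):
        for a_triple in sorted(insertions.copy()):
            if s_triple[0] == a_triple[0] and s_triple[1] == a_triple[1]:
                substitutions.append((s_triple, a_triple))
                omissions.discard(s_triple)
                insertions.discard(a_triple)
                break
    return {"matched": matched, "omissions": omissions,
            "insertions": insertions, "substitutions": substitutions}

# Example: the answer substitutes "mouse" for "cat" in the object role.
story = {("chase", "agent", "dog"), ("chase", "object", "cat")}
answer = {("chase", "agent", "dog"), ("chase", "object", "mouse")}
result = diff_semantics(story, answer)
```

A real system would additionally need to handle the ambiguity of the parsing and semantic-analysis results, which the paper's method addresses explicitly; this sketch assumes the triples are already disambiguated.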