Abraham Ittycheriah, Martin Franz, Wei-Jing Zhu, Adwait Ratnaparkhi
P.O. Box 218, Yorktown Heights, NY 10598
{abei,franzm,wjzhu,adwaitr}@watson.ibm.com

Richard J. Mammone
Dept. of Electrical Engineering, Rutgers University, Piscataway, NJ 08854
mammone@caip.rutgers.edu

Abstract

We describe the IBM Statistical Question Answering system for TREC-9 in detail and look at several examples and errors. The system is an application of maximum entropy classification for question/answer type prediction and named entity marking. We describe our system for information retrieval, which in the first step did document retrieval from a local encyclopedia, in the second step performed an expansion of the query words, and finally did passage retrieval from the TREC collection. We also discuss the answer selection algorithm, which determines the best sentence given both the question and the occurrence of a phrase belonging to the answer class desired by the question. Results at the 250-byte and 50-byte levels for the overall system, as well as results on each subcomponent, are presented.

1 System Description

Systems that perform question answering automatically by computer have been around for some time, as described by (Green et al., 1963). Only recently, though, have systems been developed to handle huge databases and a slightly richer set of questions. The types of questions that can be dealt with today are restricted to short, fact-based questions. In TREC-8, a number of sites participated in the first question-answering evaluation (Voorhees and Tice, 1999), and the best systems identified four major subcomponents:

- Question/Answer Type Classification
- Query Expansion/Information Retrieval
- Named Entity Marking
- Answer Selection

Our system architecture for this year was built around these four major components, as shown in Fig. 1. Here, the question is input and classified as asking for an answer whose category is one of the named entity classes to be described below.
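The four-stage architecture can be sketched as a simple pipeline. The sketch below is illustrative only: the function names, the toy rule-based question classifier (the paper uses maximum entropy classification), and the word-overlap retrieval stand-in are all our assumptions, not the system's actual components.

```python
# Illustrative sketch of the four-stage QA pipeline; every component
# here is a hypothetical placeholder for the real subsystem.

def classify_question(question):
    """Predict the answer class (a named entity type) for the question.
    The real system uses a maximum entropy classifier; this is a toy rule."""
    q = question.lower()
    if q.startswith("who"):
        return "PERSON"
    if q.startswith("where"):
        return "LOCATION"
    if q.startswith("when"):
        return "DATE"
    return "PHRASE"

def retrieve_passages(question, collection):
    """Stand-in for query expansion + document/passage retrieval:
    rank passages by how many question words they share."""
    q_words = set(question.lower().split())
    scored = [(len(q_words & set(p.lower().split())), p) for p in collection]
    scored.sort(key=lambda pair: -pair[0])
    return [p for score, p in scored if score > 0]

def answer_question(question, collection):
    """Run the pipeline: classify, retrieve, then select an answer.
    Named entity marking and exact answer extraction are omitted here;
    the top passage would be scanned for a phrase of the predicted class."""
    answer_class = classify_question(question)
    passages = retrieve_passages(question, collection)
    return answer_class, (passages[0] if passages else None)
```

For example, `answer_question("Where is the capital of France?", docs)` over a toy collection returns `"LOCATION"` together with the passage sharing the most question words.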
Additionally, the question is presented to the information retrieval (IR) engine for query expansion and document retrieval. This engine, given the query, looks at the database of documents and outputs the best documents or passages annotated with the named entities. The final stage is to select the exact answer, given the information about the answer class and the top-scoring passages. Minimizing various distance metrics applied over phrases or windows of text yields the best-scoring section that has a phrase belonging to the answer class. This then represents the best-scoring answer.
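A minimal sketch of this distance-based answer selection, assuming entity spans have already been marked in the sentence: here the distance metric is simply token distance between an entity span and the nearest matched question word, a stand-in for the richer metrics the system actually minimizes.

```python
def select_answer(question, sentence_tokens, entity_spans, answer_class):
    """Pick the entity span of the desired answer class that lies closest
    (in tokens) to any question word occurring in the sentence.
    A toy version of distance-based answer selection."""
    q_words = {w.lower() for w in question.split()}
    match_positions = [i for i, tok in enumerate(sentence_tokens)
                       if tok.lower() in q_words]
    best, best_dist = None, float("inf")
    for start, end, label in entity_spans:  # half-open span [start, end)
        if label != answer_class or not match_positions:
            continue
        dist = min(abs(start - p) for p in match_positions)
        if dist < best_dist:
            best, best_dist = " ".join(sentence_tokens[start:end]), dist
    return best
```

Given the tokens of "Alexander Graham Bell invented the telephone in 1876" with a PERSON span over the first three tokens and a DATE span over the last, a "Who invented the telephone" question selects the PERSON span, while a "When" question selects "1876".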