Automatically Extracting Information Needs from Ad Hoc Clinical Questions

Automatically extracting information needs from ad hoc clinical questions is an important step towards medical question answering. In this work, we first explored supervised machine-learning approaches to automatically classify an ad hoc clinical question into general topics. We then evaluated different methods for automatically extracting keywords from ad hoc clinical questions. Our methods were evaluated on the 4,654 clinical questions maintained by the National Library of Medicine. Our best system achieved an F-score of 76% for question-topic classification and an average F-score of 56% for keyword extraction from ad hoc clinical questions.
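The abstract does not specify the classifiers or features used; purely as an illustrative sketch, the snippet below shows one common way to set up supervised question-topic classification and report an F-score, using TF-IDF features and a linear SVM in scikit-learn. The example questions, topic labels, and library choice are assumptions for illustration, not the authors' actual setup.

```python
# Illustrative sketch only: supervised question-topic classification.
# Assumptions: scikit-learn, TF-IDF features, linear SVM, macro-averaged F-score.
# The example questions and topic labels are invented for demonstration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.metrics import f1_score

train_questions = [
    "What is the recommended dose of amoxicillin for otitis media in children?",
    "What are the adverse effects of long-term steroid use?",
    "How should new-onset atrial fibrillation be managed?",
    "Which test confirms a suspected pulmonary embolism?",
]
train_topics = ["treatment", "adverse_effects", "management", "diagnosis"]

test_questions = ["What is the best treatment for community-acquired pneumonia?"]
test_topics = ["treatment"]

# Unigram/bigram TF-IDF features feeding a linear SVM classifier.
classifier = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
classifier.fit(train_questions, train_topics)

predicted = classifier.predict(test_questions)
print("Predicted topic:", predicted[0])
print("Macro F-score:", f1_score(test_topics, predicted, average="macro"))
```

Macro-averaging is used here so that rare question topics weigh as much as frequent ones; whether the reported 76% and 56% F-scores are macro- or micro-averaged is not stated in the abstract.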
