Multiple Choice Question (MCQ) Answering for Machine Reading Evaluation

This article presents the experiments carried out as part of our participation in the main task (English dataset) of QA4MRE@CLEF 2013. In the developed system, we first combine the question Q with each candidate answer option A to form a (Q, A) pair; each pair is treated as a hypothesis H. Morphological expansion is applied to rebuild each H, and each H is then verified by assigning it a matching score. Stop words and interrogative words are removed from each H, and the remaining query words are used to retrieve the most relevant sentences from the associated document with Lucene. Sentences are ranked by the TF-IDF of the matching query words together with their n-gram overlap with H. Each retrieved sentence defines a text T, and each T-H pair is assigned a ranking score based on the textual entailment principle. An inference weight, i.e., a matching score, is automatically assigned to each answer option: every sentence in the associated document contributes an inference score to each H. The candidate answer option that receives the highest inference score is identified as the most relevant option and selected as the answer to the given question.
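The pipeline above can be sketched in a few lines of code. The following is a minimal, simplified illustration, not the system described in the paper: it omits morphological expansion, replaces Lucene with a plain TF-IDF computed over the document's sentences, and uses small hypothetical stop word and interrogative word lists. All function names (`content_words`, `ngram_overlap`, `answer`, etc.) are assumptions introduced for this sketch.

```python
import math
import re
from collections import Counter

# Hypothetical, abbreviated word lists; the real system would use fuller ones.
STOP_WORDS = {"the", "a", "an", "is", "are", "was", "of", "to", "in", "and", "or"}
INTERROGATIVES = {"who", "what", "when", "where", "why", "which", "how"}

def tokenize(text):
    return re.findall(r"[a-z0-9]+", text.lower())

def content_words(text):
    """Query words: tokens minus stop words and interrogative words."""
    return [w for w in tokenize(text) if w not in STOP_WORDS | INTERROGATIVES]

def ngrams(tokens, n):
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def tf_idf_score(query_words, sentence_tokens, idf):
    """TF-IDF of the query words that match the sentence."""
    counts = Counter(sentence_tokens)
    return sum(counts[w] * idf.get(w, 0.0) for w in query_words)

def ngram_overlap(h_tokens, s_tokens, max_n=3):
    """Fraction of the hypothesis n-grams found in the sentence, for n = 1..max_n."""
    score = 0.0
    for n in range(1, max_n + 1):
        hg, sg = ngrams(h_tokens, n), ngrams(s_tokens, n)
        if hg:
            score += len(hg & sg) / len(hg)
    return score

def answer(question, options, document_sentences):
    """Pick the option whose hypothesis H = (Q, A) accumulates the highest score."""
    sent_tokens = [content_words(s) for s in document_sentences]
    n_sents = len(sent_tokens)
    df = Counter(w for toks in sent_tokens for w in set(toks))
    idf = {w: math.log(n_sents / df[w]) for w in df}
    best_option, best_score = None, float("-inf")
    for option in options:
        h_tokens = content_words(question + " " + option)  # hypothesis H
        # Every sentence contributes an inference score to this H.
        score = sum(
            tf_idf_score(h_tokens, toks, idf) + ngram_overlap(h_tokens, toks)
            for toks in sent_tokens
        )
        if score > best_score:
            best_option, best_score = option, score
    return best_option
```

In this toy form the two retrieval signals from the paper, TF-IDF of matching query words and n-gram overlap with H, are simply summed over all document sentences; the actual system's entailment-based T-H scoring is considerably richer.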