Automatic summary assessment for intelligent tutoring systems

Summary writing is an important component of many English language examinations. Because grading students' summaries is time-consuming, computer-assisted assessment can help teachers grade more efficiently. Several techniques, such as latent semantic analysis (LSA), n-gram co-occurrence, and BLEU, have been proposed to support the automatic evaluation of summaries, but their performance in assessing student summaries remains unsatisfactory. To improve on this, the paper proposes an ensemble approach that integrates LSA and n-gram co-occurrence. The proposed approach achieves high accuracy and improves substantially on existing techniques. A summary assessment system based on the approach has also been developed.
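The combination described above can be sketched as a weighted ensemble of two scores: an n-gram co-occurrence score (ROUGE-N style recall against a reference summary) and a vector-space similarity score. This is a minimal illustration, not the paper's actual method: the bag-of-words cosine below is a simplified stand-in for LSA (full LSA would project the count vectors through an SVD of a background corpus), and the `weight` parameter is a hypothetical knob one would tune on graded summaries.

```python
from collections import Counter
import math

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def ngram_recall(reference, candidate, n=2):
    """ROUGE-N style recall: fraction of reference n-grams matched in the candidate."""
    ref = Counter(ngrams(reference, n))
    cand = Counter(ngrams(candidate, n))
    if not ref:
        return 0.0
    overlap = sum(min(count, cand[gram]) for gram, count in ref.items())
    return overlap / sum(ref.values())

def cosine_similarity(reference, candidate):
    """Bag-of-words cosine similarity; a simplified stand-in for LSA similarity."""
    a, b = Counter(reference), Counter(candidate)
    dot = sum(a[word] * b[word] for word in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def ensemble_score(reference, candidate, weight=0.5):
    """Weighted combination of semantic and n-gram scores (weight is hypothetical)."""
    return (weight * cosine_similarity(reference, candidate)
            + (1 - weight) * ngram_recall(reference, candidate))
```

For example, a student summary identical to the reference scores 1.0 on both components, while one sharing words but not word order scores high on cosine and lower on bigram recall, which is the complementarity the ensemble exploits.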
