A review of an information extraction technique approach for automatic short answer grading

The demand for automatic short answer grading (ASAG) systems has led researchers to explore this field in greater depth, and many techniques have been developed to achieve the highest possible accuracy. ASAG typically proceeds through the following stages: creating the data set, pre-processing, model building, grading, and model evaluation. One commonly used technique is information extraction, which identifies facts in student answers as patterns and then matches these against the teacher's answer. Accuracy is reported as the agreement between the computer and human raters. The goal of this paper is to present a review of several ASAG studies that use the information extraction technique. However, this paper does not conclude which method is best for general cases.
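The two ideas above — matching extracted facts against a teacher answer, and measuring computer–human rater agreement — can be sketched in a minimal way. The patterns, answers, and ratings below are purely hypothetical illustrations, not taken from any of the reviewed systems; real IE-based graders use far richer linguistic patterns than plain regular expressions.

```python
import re

# Hypothetical teacher answer, encoded as regex patterns: each pattern
# represents one fact a correct student answer is expected to contain.
TEACHER_PATTERNS = [
    r"photosynthesis",
    r"(light|solar)\s+energy",
    r"(glucose|sugar)",
]

def grade(student_answer: str, patterns=TEACHER_PATTERNS) -> float:
    """Return the fraction of required facts found in the student answer."""
    text = student_answer.lower()
    hits = sum(1 for p in patterns if re.search(p, text))
    return hits / len(patterns)

def cohen_kappa(rater_a: list, rater_b: list) -> float:
    """Chance-corrected agreement between two raters (Cohen's kappa),
    one common way to report computer-human agreement."""
    n = len(rater_a)
    labels = set(rater_a) | set(rater_b)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    expected = sum(
        (rater_a.count(lab) / n) * (rater_b.count(lab) / n) for lab in labels
    )
    return (observed - expected) / (1 - expected)

# All three required facts appear, so the answer scores 1.0.
score = grade("Plants use light energy to make glucose via photosynthesis.")
```

A system would then compare its grades with human grades over a whole answer set: for example, `cohen_kappa([1, 1, 0, 1], [1, 0, 0, 1])` evaluates to 0.5, indicating moderate agreement beyond chance.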
