Automatic Essay Grading System for Short Answers in English Language

An Automatic Essay Grading (AEG) system is computer technology that evaluates and grades written prose. This research addresses the short essay answer, written in short sentences, of which there are two types: the open-ended short answer and the closed-ended short answer; our research domain is the closed-ended short answer in the computer subject. Marking short essay answers automatically is one of the most complicated domains because it relies heavily on semantic similarity, the degree to which two sentences are similar in meaning when they use words with similar meanings. Humans can easily judge whether concepts are related to each other; a problem arises, however, when students use synonyms in their answers, for example when they forget the target wording and substitute alternative words that differ from the model answer prepared by the instructor. Standard text similarity measures perform poorly on such tasks. A short answer also provides only limited content, because the text is typically short, ranging from a single word to a dozen words. This research makes two proposals. The first is an Alternative Sentence Generator method that generates alternative model answers by connecting the method to a synonym dictionary. The second combines three algorithms in the matching phase: Common Words (COW), Longest Common Subsequence (LCS), and Semantic Distance (SD); these algorithms have been used successfully in many Natural Language Processing systems and have yielded efficient results. The system was tested manually on 40 questions answered by three students and evaluated by a teacher in class. The proposed system yielded an 82% correlation with human grading, making it significantly better than other state-of-the-art systems.
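The matching phase described above can be illustrated with a minimal sketch. The code below combines a common-words overlap score (COW) with a longest-common-subsequence score (LCS) over word tokens; the tokenization, normalization, and equal weighting are assumptions for illustration, not the paper's exact formulation, and the Semantic Distance (SD) component is omitted since it would require a lexical resource such as WordNet.

```python
def common_words_score(student: str, model: str) -> float:
    """COW: fraction of model-answer words also present in the student answer."""
    s, m = set(student.lower().split()), set(model.lower().split())
    return len(s & m) / len(m) if m else 0.0


def lcs_score(student: str, model: str) -> float:
    """LCS over word tokens, normalized by the model-answer length."""
    a, b = student.lower().split(), model.lower().split()
    # Standard dynamic-programming table for longest common subsequence.
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, wa in enumerate(a):
        for j, wb in enumerate(b):
            dp[i + 1][j + 1] = dp[i][j] + 1 if wa == wb else max(dp[i][j + 1], dp[i + 1][j])
    return dp[len(a)][len(b)] / len(b) if b else 0.0


def match_score(student: str, model: str, w_cow: float = 0.5, w_lcs: float = 0.5) -> float:
    """Weighted combination of the two lexical scores (weights are assumed)."""
    return w_cow * common_words_score(student, model) + w_lcs * lcs_score(student, model)
```

In a full system this score would be computed against every alternative model answer produced by the sentence generator, with the maximum taken as the student's grade for that question.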
