Automated marking of assignments consisting of written text would doubtless be of advantage to teachers and education administrators alike. When large numbers of assignments are submitted at once, teachers find themselves bogged down in their attempts to provide consistent evaluations and high-quality feedback to students within as short a timeframe as is reasonable, usually a matter of days rather than weeks. Educational administrators are also concerned with quality and timely feedback, but in addition must manage the cost of doing this work. Clearly an automated system would be a highly desirable addition to the educational tool-kit, particularly if it can provide a less costly and more effective outcome. In this paper we present a description and evaluation of four automated essay grading systems. We then report on our trial of one of these systems, undertaken at Curtin University of Technology in the first half of 2001. The purpose of the trial was to assess whether automated essay grading was feasible, economically viable, and as accurate as manually grading the essays. Within the Curtin Business School we have not previously used automated grading systems, but the benefit could be enormous given the very large numbers of students in some first-year subjects.