Automated Essay Marking – for both Style and Content

This paper covers the automated assessment of essays. Such marking requires the assessment of style and, where appropriate, the assessment of content. The first part of this paper describes the assessment of style; the second part describes the assessment of content.

Style

The methodology used to assess style is based on a set of 'common' metrics and requires some initial calibration. After briefly outlining the method of assessing style, this part concludes by posing questions on several aspects of the use of a common metric set, namely: How valid is the computer marking of style? Could there be a standard metric set for common use? What constitutes a standard or an optimal metric set? What is the effect of the size of the calibration sample used?

Content

After the terms "usage" and "coverage" are defined, one particular essay set is examined in detail in terms of both. The schema for content does not require extensive development before the assessment of content can commence. No calibration is required for the methodology used to mark content, although the usual practice of taking a sample to verify the method is recommended.

Conclusion

At its current stage of development, the methodology provides an expandable, flexible method for marking essay content, as well as a method for marking essay style, with both marks produced by the computer. The essay set used as a demonstration vehicle will be used further to indicate the potential throughput that computer-based marking may achieve.
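To make the style methodology concrete, the sketch below shows one plausible shape for a metric-based style marker with calibration. The particular metrics (average sentence length, average word length, type-token ratio) and the scoring rule are illustrative assumptions, not the metric set the paper actually uses; the point is only that a 'common' metric set is computed per essay and normalised against a calibration sample.

```python
import re
import statistics

def style_metrics(text):
    """Compute a small, illustrative set of 'common' style metrics."""
    words = re.findall(r"[A-Za-z']+", text)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return {
        "avg_sentence_len": len(words) / max(len(sentences), 1),
        "avg_word_len": sum(len(w) for w in words) / max(len(words), 1),
        "type_token_ratio": len({w.lower() for w in words}) / max(len(words), 1),
    }

def calibrate(sample_texts):
    """Derive a per-metric norm (mean and spread) from a calibration sample."""
    per_metric = {}
    for t in sample_texts:
        for k, v in style_metrics(t).items():
            per_metric.setdefault(k, []).append(v)
    return {k: (statistics.mean(vs), statistics.pstdev(vs) or 1.0)
            for k, vs in per_metric.items()}

def style_score(text, calibration):
    """Score an essay by its average deviation from the calibrated norm.

    1.0 means the essay matches the norm exactly; the score falls as the
    essay's metrics drift away from the calibration sample (a hypothetical
    scoring rule, chosen here only for illustration).
    """
    m = style_metrics(text)
    devs = [abs(m[k] - mean) / sd for k, (mean, sd) in calibration.items()]
    return max(0.0, 1.0 - sum(devs) / len(devs))
```

The size of the calibration sample passed to `calibrate` directly affects how stable the per-metric norms are, which is the calibration-sample question the paper raises.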
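A content marker of the kind described, driven by a schema rather than a calibration step, might be sketched as follows. The readings of "usage" (the fraction of the essay's words that are schema terms) and "coverage" (the fraction of schema terms the essay mentions) are assumptions made for illustration; the paper defines these terms itself, and its definitions may differ.

```python
import re

def content_marks(text, schema_terms):
    """Illustrative usage/coverage computation against a content schema.

    'coverage' is taken here as the fraction of schema terms the essay
    mentions at least once; 'usage' as the fraction of the essay's words
    that are schema terms. Both readings are hypothetical.
    """
    words = [w.lower() for w in re.findall(r"[A-Za-z']+", text)]
    schema = {t.lower() for t in schema_terms}
    hits = [w for w in words if w in schema]
    return {
        "usage": len(hits) / max(len(words), 1),
        "coverage": len(set(hits)) / max(len(schema), 1),
    }
```

Because the marks are computed directly from the schema, no calibration pass is needed; extending the schema with new terms extends the marker, which matches the expandable, flexible character the paper claims for its content methodology.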