Analytical assessment rubrics to facilitate semi-automated essay grading and feedback provision

Assessment is an essential part of the learning process, both in formative settings and in traditional summative assessment. Both types are challenging, as it can be difficult to ensure consistency, reliability and the absence of bias. In formative assessment the problems of workload and timely results are even greater, as the task is carried out more frequently. Information technology can assist teachers with these challenges to varying degrees, depending on the type of test item. The essay test item, besides its well-known application in training language skills and acquiring foreign languages, is widely used to test higher-order thinking skills and can therefore be applied in a great variety of subject domains at different educational levels. Evaluating essays is a time-consuming task, hence supporting technologies can deliver great advantages. In this paper we introduce a semi-automated approach to essay grading based on analytical assessment rubrics, the use of which facilitates feedback provision. A prototype system is described in terms of requirements derived from the authors' own experience and the published literature, together with a workflow model. Reflection on the development experience and user feedback informs further development of the system.
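Although the abstract describes the approach only at a high level, an analytical rubric lends itself to a simple data model: each criterion is scored independently against a small set of performance levels, and each level can carry a reusable feedback snippet. The sketch below is purely illustrative and not the authors' implementation; the names (Level, Criterion, Rubric, grade) are hypothetical.

```python
from dataclasses import dataclass, field

# Illustrative data model for an analytical rubric (hypothetical names):
# each criterion is scored on discrete levels, and each level carries a
# canned feedback snippet that can be reused across essays.
@dataclass
class Level:
    score: int        # points awarded at this performance level
    descriptor: str   # what performance at this level looks like
    feedback: str     # feedback snippet shown to the student

@dataclass
class Criterion:
    name: str                    # e.g. "Argument structure"
    weight: float = 1.0          # relative importance of the criterion
    levels: list[Level] = field(default_factory=list)

@dataclass
class Rubric:
    criteria: list[Criterion]

    def grade(self, chosen: dict[str, Level]) -> tuple[float, list[str]]:
        """Aggregate a weighted total score and collect the feedback
        snippets for the level the marker selected per criterion."""
        total = sum(c.weight * chosen[c.name].score for c in self.criteria)
        comments = [chosen[c.name].feedback for c in self.criteria]
        return total, comments
```

Per-criterion feedback snippets are what make such an approach semi-automated: the marker selects a level for each criterion, and the system assembles the numeric grade and the textual feedback.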
