Improving the standard and consistency of multi-tutor grading in large classes

For several years the authors have coordinated a large engineering design subject with a typical cohort of more than 300 students per semester. Lectures are supported by tutorials of approximately 32 students that combine collaborative team-based and project-based learning activities. Each tutor is responsible for grading the assessment tasks of the students in their tutorial. A common issue is how to achieve a consistent standard of marking and student feedback across different tutors. To address this issue the authors have used a number of methods, including double-blind marking and random re-marking, to support consistent grading. However, even when only small variations were found between the overall grading of different tutors, students still complained about a perceived lack of consistency. In this paper we report on an investigation into the use of a collaborative peer learning process among tutors to improve mark standardisation and marker consistency, and to build tutors’ expertise and capacity in providing quality feedback. We found that students’ perceptions of differences in grading were exacerbated by inconsistencies in the language tutors use when providing feedback, and by differences in tutors’ perceptions of how well individual criteria were met.
