Developing a research design for comparative evaluation of marking and feedback support systems

Marking and provision of formative feedback on student assessment items are essential but onerous and potentially error-prone activities in teaching and learning. Marking and Feedback Support Systems (MFSS) aim to improve the efficiency and effectiveness of human (not automated) marking and feedback provision, resulting in reduced marking time, improved accuracy of marks, improved student satisfaction with feedback, and improved student learning. This paper highlights issues in the rigorous evaluation of MFSS, including potential confounding variables as well as ethical issues relating to the fairness of actual student assessments during evaluation. To address these issues, the paper proposes an evaluation research approach that combines artificial evaluation, in the form of a controlled field experiment, with naturalistic evaluation, in the form of a field study, with the evaluation conducted through live application of the MFSS being evaluated across a variety of units, assessment items, and marking schemes. The controlled field experiment requires each student's assessment item to be marked once with each MFSS and once with a manual (non-MFSS) control method, and it requires each marker to use all of the MFSS as well as the manual method. Through such a design, the results of the comparative evaluation will inform design-based education research to further develop MFSS, with the overall goal of more efficient and effective assessment and feedback systems and practices that enhance teaching and learning.
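As an illustration of the crossed allocation this design implies, the minimal Python sketch below generates an assignment in which every student's item is marked once with each method (each MFSS plus the manual control) and markers are rotated so that each marker uses every method. All names here are hypothetical (the allocate function, the MFSS-A/MFSS-B labels, the marker and student identifiers); this is a sketch under those assumptions, not an implementation described in the paper.

```python
# Hypothetical sketch of the crossed marking allocation implied by the proposed
# controlled field experiment: each item is marked once per method, and marker
# assignments are rotated (Latin-square style) so every marker uses every method.

methods = ["MFSS-A", "MFSS-B", "Manual"]          # assumed methods under comparison
markers = ["Marker-1", "Marker-2", "Marker-3"]    # assumed pool of markers
students = [f"Student-{i}" for i in range(1, 7)]  # assumed cohort of assessment items

def allocate(students, markers, methods):
    """Return (student, method, marker) triples: each student's item is marked once
    with each method, with markers rotated across methods by student index."""
    allocation = []
    for s_index, student in enumerate(students):
        for m_index, method in enumerate(methods):
            # Offset the marker by the student index so that, across the cohort,
            # every marker is paired with every method, reducing marker/method confounds.
            marker = markers[(s_index + m_index) % len(markers)]
            allocation.append((student, method, marker))
    return allocation

if __name__ == "__main__":
    for student, method, marker in allocate(students, markers, methods):
        print(f"{student}: marked with {method} by {marker}")
```

The rotation is one simple way to counterbalance markers against methods so that differences in marking time or mark accuracy can be attributed to the MFSS rather than to a particular marker; the paper's actual design may use a different counterbalancing scheme.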
