Evaluating the evaluation tools: methodological issues in the FAST project

Assessment is now understood to be a key influence on what and how students learn and, through the feedback students receive, on their understanding and future learning. The Formative Assessment in Science Teaching (FAST) project is an FDTL-funded collaboration between the Open University and Sheffield Hallam University. The aims of the project are:

• to investigate the impact of existing formative assessment practices on student learning behaviour
• to develop, implement and evaluate new approaches to providing students with timely and useful feedback.

The theoretical foundation of the FAST project is that there are 11 conditions under which assessment best supports student learning (Gibbs and Simpson, 2004). Derived from a comprehensive literature review of theories and case studies of assessment, these 11 conditions form the conceptual framework for the project and for the evaluation tools developed by the project team.

This paper evaluates the usefulness of the principal evaluation tool used in the FAST project: the Assessment Experience Questionnaire (AEQ). The AEQ has been used extensively in the FAST project, and increasingly in other institutions, and is designed as a diagnostic tool with which lecturers can assess the extent to which students experience the 11 conditions in assessment. The AEQ uses six scales of six items, each addressing at least one of the conditions:

1. Time demands and distribution of effort
2. Assignments and learning
3. Quantity and timing of feedback
4. Quality of feedback
5. Student use of feedback
6. The examination

Drawing on interviews with students and lecturers, and on questionnaire findings gathered over three years, the paper discusses the practical application and limitations of the AEQ as an evaluation tool. Using comparisons with other tools developed by the FAST project, it also addresses the methodological issues raised by the AEQ and suggests ways in which the AEQ, used in conjunction with other methods, can lead to a better understanding of assessment practices.
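To make the AEQ's six-scale structure concrete, the sketch below shows one plausible way to compute per-scale scores from a completed questionnaire. This is an illustration only, not the FAST project's published scoring procedure: the six scale names come from the list above, while the item-to-scale layout (36 items, six consecutive items per scale) and the 1-to-5 Likert coding are assumptions made for the example.

```python
# Illustrative sketch only: computes mean scores for the six AEQ scales
# from one student's responses. Scale names are taken from the paper; the
# grouping of six consecutive items per scale and the 1-5 Likert coding
# are assumptions for this example, not the published AEQ scoring key.

AEQ_SCALES = [
    "Time demands and distribution of effort",
    "Assignments and learning",
    "Quantity and timing of feedback",
    "Quality of feedback",
    "Student use of feedback",
    "The examination",
]

ITEMS_PER_SCALE = 6  # "six scales of six items"


def aeq_scale_scores(responses: list[int]) -> dict[str, float]:
    """Return the mean response for each scale.

    `responses` holds 36 Likert ratings (1 = strongly disagree,
    5 = strongly agree), ordered so that items 0-5 belong to the
    first scale, items 6-11 to the second, and so on (an assumed
    layout for this sketch).
    """
    expected = len(AEQ_SCALES) * ITEMS_PER_SCALE
    if len(responses) != expected:
        raise ValueError(f"expected {expected} responses, got {len(responses)}")
    if not all(1 <= r <= 5 for r in responses):
        raise ValueError("responses must be Likert ratings from 1 to 5")

    scores = {}
    for i, scale in enumerate(AEQ_SCALES):
        items = responses[i * ITEMS_PER_SCALE:(i + 1) * ITEMS_PER_SCALE]
        scores[scale] = sum(items) / ITEMS_PER_SCALE
    return scores


if __name__ == "__main__":
    import random

    random.seed(1)
    demo = [random.randint(1, 5) for _ in range(36)]
    for scale, score in aeq_scale_scores(demo).items():
        print(f"{scale}: {score:.2f}")
```

A real scoring key would also handle negatively worded (reverse-scored) items, which this sketch deliberately omits.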

[1] Patricia M. Lyon, et al. The Use of the Course Experience Questionnaire as a Monitoring Evaluation Tool in a Problem-based Medical Programme, 2002.

[2] P. Zeegers, et al. A Revision of the Biggs' Study Process Questionnaire (R-SPQ), 2002.

[3] Claire Simpson, et al. Measuring the response of students to assessment: the Assessment Experience Questionnaire, 2003.

[4] P. Ramsden. A performance indicator of teaching quality in higher education: The Course Experience Questionnaire, 1991.

[5] John B. Mitchell, et al. Web-based student evaluations of professors: the relations between perceived quality, easiness and sexiness, 2004.

[6] Roy Ballantyne, et al. Beyond Student Evaluation of Teaching: Identifying and addressing academic staff development needs, 2000.

[7] P. Abrami, et al. Students' Evaluations of University Teaching: Research Findings, Methodological Issues, and Directions for Future Research, 1987.

[8] Margaret Rangecroft, et al. Bridging the gap: an alternative tool for course evaluation, 2005.

[9] Kam-por Kwan, et al. How Fair are Student Ratings in Assessing the Teaching Performance of University Teachers?, 1999.

[10] John Woodhouse, et al. But is it Fair?, 2002.

[11] L. McDowell, et al. "But is it fair?": An exploratory study of student perceptions of the consequential validity of assessment, 1997.

[12] D. Sluijsmans, et al. The use of self-, peer and co-assessment in higher education: A review, 1999.

[13] F. Dochy, et al. Students' perceptions about evaluation and assessment in higher education: a review, 2005.

[14] Mark Nichols, et al. Evaluating Flexible Delivery across a Tertiary Institution, 2002.

[15] Evelyn Brown, et al. Evaluation tools for investigating the impact of assessment regimes on student learning, 2003.