Evaluating students' programs using automated assessment: a case study
This poster presents our experience of using automated assessment in a programming course given by the Department of Computer Science at Holon Institute of Technology (HIT). The course is a first-year course within an engineering degree; it introduces students to programming in C and teaches the basics of the imperative programming paradigm. About 200 students took the course in Autumn 2009. They were required to submit three programming assignments, each of which contained four programming tasks. Most of the assignments were evaluated manually. Our poster presents our experience with 15 students who used an automated assessment system to submit one of their assignments. The system used was Checkpoint [1], an integrated automated assessment system developed by the first author, which generates automated feedback and evaluation for students' work. The system manages both homework assignments and formal examinations based on a range of different question types, including questions requiring free-text answers. It also allows automatically generated marks to be manually moderated and adjusted, with feedback comments from the human moderator. Checkpoint has been in use since October 2005 at the University of Brighton in the UK for assessing two first-year Java programming modules, which together enrol about 150 students annually.
Students at Brighton are required to use Checkpoint to submit assignments from the very beginning of their course, and the two end-of-semester formal examinations are also administered using Checkpoint. The entire assessment structure is based around the ability to assess students on a 'little and often' basis, with assessment deadlines at fortnightly intervals throughout the year. However, students at Holon have no prior experience of automated assessment, and it has not so far been used on any other courses within the institution. The assessment structure is therefore more traditional and involves fewer but larger assignments. The experiment reported here was carried out to evaluate the benefits of automated assessment, following earlier work by the second author [2], and involved automating one of these assignments. The assignment was offered in two ways: a conventional manual submission and an automated submission via Checkpoint. The students were given the choice of submitting their work via either method.
Checkpoint was modified for this experiment to display the questions in Hebrew and to support feedback comments in Hebrew given by the human evaluator as part of the moderation process. Checkpoint was also modified to support C in addition to Java, since this was the language in which the students were required to write their programs.
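Although the poster does not describe Checkpoint's internal grading mechanism, the kind of check that such a system automates can be illustrated by a simple output-comparison harness. The sketch below is illustrative only: the student executable name (student_prog), the test input, and the expected output are assumptions made for the example and are not taken from Checkpoint itself.

    /* Minimal sketch of an output-comparison check for a student C submission.
     * Illustration only: the executable name, test input, and expected output
     * are hypothetical; this is not Checkpoint's actual grading code. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        /* Assume the student's program has already been compiled to ./student_prog
         * and is expected to print "42\n" when given "6 7" on standard input. */
        const char *expected = "42\n";
        char actual[256] = {0};

        FILE *p = popen("echo '6 7' | ./student_prog", "r");
        if (p == NULL) {
            perror("popen");
            return EXIT_FAILURE;
        }
        fread(actual, 1, sizeof(actual) - 1, p);
        int status = pclose(p);

        if (status == 0 && strcmp(actual, expected) == 0) {
            printf("PASS: output matched the expected result\n");
            return EXIT_SUCCESS;
        }
        printf("FAIL: expected \"%s\", got \"%s\"\n", expected, actual);
        return EXIT_FAILURE;
    }

A real assessment system would run many such test cases per task, capture compilation errors, and turn the pass/fail results into feedback for the student; the point of the sketch is simply the compile-run-compare cycle that can be repeated automatically for every submission attempt.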
All the participants, both students and instructors, gave very positive feedback. The students had all been evaluated manually in previous assignments, and their comments on the differences were very favourable. They said that the system was impartial and that, because it allowed them to submit many attempts before the deadline, it helped them improve their programming skills. It also allowed the instructors to monitor student progress during the course of the assignment and to act to correct misconceptions.
[1] N. Ben-Zvi et al., "Automated evaluation methods with attention to individual differences - a study of a computer-based course in C," 32nd Annual Frontiers in Education, 2002.