Preface to the special issue on automated assessment of programming assignments

Programming computers is becoming an increasingly popular activity, not only among computer science students but across many other disciplines. Funding and other resources for teaching programming, however, have for the most part not kept pace with the continuous growth in the number of students studying it. Academic institutions face the challenge of providing their students with better-quality teaching while minimizing the additional workload for staff. While traditional teaching methods can be enhanced by audio/visual means and advances in online learning, in the area of assessment the problems persist. Computer-based assessment (CBA), which over the years has become an increasingly important teaching tool, can help educators address these problems. For the past twenty years, educators have reported on the practical and pedagogical benefits of using automated assessment tools to assess student coursework in programming.

The purpose of this issue of JERIC is to bring cutting-edge research on the automated assessment of student programs, and other kinds of programming assignments, to a wider audience. It is devoted both to fully automated assessment and to partial assessment of student programs by machine. We do not provide an overview of the field in this introductory statement: a recent survey by Ala-Mutka [Ala-Mutka 2005], the review by Douce et al. in this issue [Douce et al. 2006], and several other articles in this issue provide comprehensive overviews and offer various ways to classify the multitude of known work. To introduce the articles in this issue, however, the editors found it useful to distinguish three categories of systems: (1) systems that assess program-tracing skills; (2) systems that assess program-writing skills; and (3) intelligent programming tutors.

The first group of systems attempts to assess students' knowledge of programming language semantics by presenting students with a program and asking them to trace it.
The answer to this type of problem is the result of executing the program: What was printed? What is the final state of the variables and data structures? The ability to automatically evaluate student answers stems from the system's ability to execute the program or the algorithm with the same data and compare that result with the one entered by the student. The QuizPACK system reported by Brusilovsky and Sosnovsky [2006] in this issue
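The execute-and-compare idea behind tracing assessment can be illustrated with a minimal sketch: run the traced program, capture its actual output, and compare it with the output the student predicted. This is an illustrative example only, not the implementation of any system discussed in this issue; the function name `grade_tracing_answer` and all details are assumptions.

```python
import contextlib
import io

def grade_tracing_answer(program_source: str, student_answer: str) -> bool:
    """Execute the traced program, capture what it prints, and compare
    the actual output with the student's predicted output."""
    buffer = io.StringIO()
    with contextlib.redirect_stdout(buffer):
        exec(program_source, {})  # run in a fresh, isolated namespace
    return buffer.getvalue().strip() == student_answer.strip()

# Example: the student is asked to trace this loop and predict its output.
program = (
    "total = 0\n"
    "for i in range(1, 4):\n"
    "    total += i\n"
    "print(total)"
)
print(grade_tracing_answer(program, "6"))   # True: the loop prints 6
print(grade_tracing_answer(program, "10"))  # False: an incorrect trace
```

A production system would of course sandbox the execution and normalize outputs more carefully; the point here is only that grading reduces to executing the same code the student traced and comparing results.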

[1] Lauri Malmi et al. Experiences on automatically assessed algorithm simulation exercises with different resubmission policies. JERIC, 2005.

[2] Christopher Douce et al. Automatic test-based assessment of programming: A review. JERIC, 2005.

[3] Amruth N. Kumar et al. A tutor for counter-controlled loop concepts and its evaluation. 33rd Annual Frontiers in Education (FIE 2003), 2003.

[4] Tobias Lauer et al. Student-built algorithm visualizations for assessment: flexible generation, feedback and grading. ITiCSE '05, 2005.

[5] Nathan Griffiths et al. The BOSS online submission and assessment system. JERIC, 2005.

[6] Peter Brusilovsky et al. Individualized exercises for self-assessment of programming knowledge: An evaluation of QuizPACK. JERIC, 2005.

[7] Amruth N. Kumar. Generation of problems, answers, grade, and feedback---case study of a fully automated tutor. JERIC, 2005.

[8] Amruth N. Kumar. Results from the evaluation of the effectiveness of an online tutor on expression evaluation. SIGCSE '05, 2005.

[9] Kirsti Ala-Mutka. A Survey of Automated Assessment Approaches for Programming Assignments. Computer Science Education, 2005.

[10] Michael T. Goodrich et al. PILOT: an interactive tool for learning and grading. SIGCSE '00, 2000.

[11] Antonija Mitrovic et al. An Intelligent SQL Tutor on the Web. International Journal of Artificial Intelligence in Education, 2003.

[12] Amruth N. Kumar et al. A Tutor for Learning Encapsulation in C++ Classes. 2003.

[13] Amruth N. Kumar. A tutor for using dynamic memory in C++. 32nd Annual Frontiers in Education (FIE 2002), 2002.

[14] Peter Brusilovsky et al. ELM-ART: An Adaptive Versatile System for Web-based Instruction. 2001.