Adaptive assessment in programming classes

This paper presents PAT (Programming Adaptive Testing), a Web-based adaptive testing system for assessing students’ programming knowledge. PAT was used in two high school programming classes by 73 students. Its question bank consists of 443 questions, each classified into one of three difficulty levels. The difficulty levels are aligned with the lower levels of Bloom’s taxonomy, so students are assessed in the cognitive domain. PAT was thus designed according to pedagogical theory to fit the needs of the course “Application Development in a Programming Environment”. If a student answers a question correctly, a harder question is presented next; otherwise, an easier one follows. Easy questions test the student’s knowledge, while difficult questions test the student’s ability to apply prior knowledge to new problems. Each student answers a personalized test of 30 questions, and PAT classifies the student into one of three programming-skill levels. The system can predict the corresponding classification of students in the Greek National Exams. Furthermore, it can help both students and teachers: a student can discover his/her programming shortcomings, and a teacher can objectively assess students as well as identify the topics that need to be revisited.
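The adaptation rule described above (move to a harder question after a correct answer, to an easier one after an incorrect answer, over a 30-question test, then map the score to one of three skill levels) can be sketched as follows. This is a minimal illustration of the general mechanism, not PAT’s actual implementation: the function names, the starting level, and the classification thresholds are all assumptions, since the abstract does not specify them.

```python
import random

def run_adaptive_test(bank, answer_fn, num_questions=30):
    """Simulate the difficulty-adaptation rule: go up one difficulty
    level after a correct answer, down one after an incorrect answer.
    `bank` maps each level (1..3) to a list of questions;
    `answer_fn(question)` returns True for a correct answer.
    All names here are illustrative, not PAT's real interface."""
    level = 1          # starting level is an assumption
    score = 0
    for _ in range(num_questions):
        question = random.choice(bank[level])
        if answer_fn(question):
            score += 1
            level = min(level + 1, 3)   # present a harder question next
        else:
            level = max(level - 1, 1)   # present an easier question next
    return score

def classify(score, num_questions=30):
    """Map a raw score to one of three skill levels.  The thresholds
    are illustrative; the paper does not state how PAT classifies."""
    if score >= 2 * num_questions // 3:
        return "high"
    if score >= num_questions // 3:
        return "medium"
    return "low"
```

In a real adaptive testing system the next item would typically also depend on the student’s full response history rather than only the last answer, but this two-function sketch captures the up/down rule the abstract describes.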
