Large Programming Task vs Questions-and-Answers Examination in Java Introductory Courses

This study investigates two forms of examination in introductory Java programming courses: computer-based examination, in which students are given a single, relatively large programming task, and paper- or computer-based examination, in which students answer a number of smaller questions. The study focuses on how well the two examination forms reveal students' practical skills and theoretical knowledge in relation to the intended learning outcomes specified in the course syllabus. Course syllabuses from 8 Swedish universities are examined, and from these, 4 specified learning outcomes, each shared by at least 5 of the selected universities, are identified. These learning outcomes are then used to analyze the exams from two of the universities, one practicing the large-programming-task examination form and the other the questions-and-answers form. For both examination forms, two course phases are analyzed with respect to the specified learning outcomes: 1) exam (e.g., what the questions capture and how they are formulated), and 2) execution (e.g., what the students answer and how well the intended purposes of the questions are fulfilled). The results illustrate the strengths and weaknesses of the two examination forms. They can serve as decision support when selecting an examination form and, furthermore, as a basis for improving both forms toward more comprehensive examinations with respect to the specified learning outcomes.
