Multiple-choice vs free-text code-explaining examination questions

The BRACElet project has developed a number of code-explaining questions and thoroughly researched novice programmers' difficulties in answering them correctly. In a prior study we explored whether students might perform better on multiple-choice code-explaining questions than on free-text code-explaining questions, and concluded that the multiple-choice form led to a perceptible but generally insignificant improvement in students' performance. However, that study compared different cohorts of students, leaving some doubt over its validity. In this study we seek a more definitive answer by including multiple-choice and free-text code-explaining questions in the same exam, and comparing the performances of individual students on the question pairs. We find that students perform substantially and significantly better on the multiple-choice questions. In the light of this finding we reconsider the question: when students cannot correctly describe the purpose of a small piece of code, is it because they do not understand the code; because they understand its detail but are unable to abstract that detail to determine the purpose; or because they understand the purpose but are unable to express it?
