The Compound Nature of Novice Programming Assessments

Failure rates in introductory programming courses are notoriously high, and researchers have noted that students struggle with the assessments typically used to evaluate programming ability. Current assessment practice in introductory courses consists predominantly of questions that each involve many different concepts and facts. A student with fragile knowledge of any one of the required areas may be unable to produce a working solution, even when they know most of the required material. Such assessments also make it difficult for a teacher to distinguish what a student does and does not know. In this paper, we analyse examination questions used to assess novice programming at the syntax level and describe the extent to which each syntax component is used across the various examination questions. We also explore the degree to which questions involve multiple syntax elements, as an indicator of how independently concepts are examined.
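
To make the compound framing concrete, consider a hypothetical CS1-style writing task of the kind described above (a sketch of our own for illustration; the question wording and the names CompoundQuestion and largestEven are assumptions, not drawn from the exams analysed in the paper): "Write a method that, given an array of non-negative integers, returns the largest even value, or -1 if there is none." A correct answer compounds many syntax elements at once, so weakness in any single element can sink the whole solution.

// Hypothetical compound exam question: one short task, many syntax elements.
public class CompoundQuestion {
    static int largestEven(int[] values) {
        int best = -1;                                // variable declaration + initialisation
        for (int i = 0; i < values.length; i++) {     // loop header: init, test, update
            if (values[i] % 2 == 0) {                 // array indexing + modulus + comparison
                if (values[i] > best) {               // nested conditional
                    best = values[i];                 // assignment
                }
            }
        }
        return best;                                  // return statement
    }

    public static void main(String[] args) {
        System.out.println(largestEven(new int[] {3, 8, 5, 12, 7}));  // prints 12
        System.out.println(largestEven(new int[] {1, 3, 5}));         // prints -1
    }
}

A student who cannot yet write a correct for-loop header, or who confuses == with =, fails the entire question, and the resulting mark gives the teacher no indication of which of the several elements was at fault.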
