Assessing students' understanding of object structures

We present a theoretically derived and empirically tested competence model for the concepts of "object state" and "references", both of which form an important part of object-oriented programming. The model characterizes different levels of programming capability, focusing on the possible learning stages of beginning learners, and is grounded in the notion of understanding objects and their interactions with each other at runtime. From a hierarchical description of our theory, we derive a two-dimensional structure that separates the hierarchy into two facets: "structure" (how objects are structured and stored) and "behaviour" (how objects interact with and access each other). On this basis, we developed a set of test items and collected data in a CS1 course (N = 195) to validate the item set. We analyzed the data with a Rasch model to check item difficulties and the presence of distinct difficulty levels, and with a factor analysis to check the dimensionality of the model. Furthermore, we argue for the validity of the items with the help of additional data collected from the students. The results indicate that our theoretical assumptions hold and that the items are usable with minor modifications.
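The abstract refers to checking item difficulties with a Rasch model, which places persons and items on a common logit scale via P(correct) = sigmoid(theta_person − b_item). As an illustrative sketch only (the paper's actual analysis was presumably run with a dedicated psychometrics package, not this code), a minimal joint maximum-likelihood Rasch fit on simulated dichotomous responses can be written as:

```python
import numpy as np

def fit_rasch(X, n_iter=1000, lr=0.2):
    """Joint maximum-likelihood fit of a dichotomous Rasch model.

    X is a (persons x items) 0/1 response matrix. Returns person
    abilities theta and item difficulties b, centred so mean(b) == 0.
    """
    n_persons, n_items = X.shape
    theta = np.zeros(n_persons)
    b = np.zeros(n_items)
    for _ in range(n_iter):
        # Rasch model: P(correct) = sigmoid(theta_p - b_i)
        p = 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))
        resid = X - p
        theta += lr * resid.mean(axis=1)  # ascend log-likelihood in theta
        b -= lr * resid.mean(axis=0)      # ascend log-likelihood in b
        b -= b.mean()                     # fix the origin of the logit scale
    return theta, b

# Simulated check: five items with known, well-separated difficulties.
rng = np.random.default_rng(0)
true_b = np.array([-1.5, -0.5, 0.0, 0.5, 1.5])
true_theta = rng.normal(0.0, 1.0, size=400)
p_true = 1.0 / (1.0 + np.exp(-(true_theta[:, None] - true_b[None, :])))
X = (rng.random(p_true.shape) < p_true).astype(float)

theta_hat, b_hat = fit_rasch(X)
# The estimated difficulties b_hat should recover the true item ordering.
```

Ordering the items by estimated difficulty is what makes it possible to check whether empirically easy and hard items correspond to the levels predicted by the competence model.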
