Automated Measurement of Competencies and Generation of Feedback in Object-Oriented Programming Courses

To counter the shortage of computing specialists, there is a growing need for corresponding study and training programs, in particular for learning programming. Automated assessment of solutions to programming tasks could relieve teachers of time-consuming corrections and provide individual feedback even in online courses without a personal teacher. The e-assessment system JACK has been applied successfully for more than 12 years, e.g., in a CS1 lecture. However, there are only a few solid research results on competencies and competence models for object-oriented programming (OOP) that could serve as a foundation for high-quality feedback. In a joint research project of groups at two universities, we aim to empirically define competencies for OOP using a mixed-methods approach. As a first step, we performed a qualitative content analysis of source code (sample solutions and students' solutions) and identified a set of suitable competency components that forms the core of further investigations. Semi-structured interviews with learners will be used to identify their difficulties and misconceptions and to refine the set of competency components. Based on these results, we will use Item Response Theory (IRT) to develop an automatically evaluable test instrument for the implementation of abstract data types. Finally, we will develop empirically founded, competency-based feedback that can be used in e-assessment systems and MOOCs.
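For illustration, the following sketch shows the kind of task such a test instrument might address: the implementation of a small abstract data type in Java. The class and its requirements are our own hypothetical example, not an item from the project's item pool.

```java
// Hypothetical example item (our illustration): implement a bounded stack
// as an abstract data type. Student solutions to tasks of this kind are the
// unit of analysis for the content analysis and, later, for automated
// competency-based assessment.
public class BoundedStack<T> {
    private final Object[] elements; // generic arrays cannot be created directly
    private int size = 0;

    public BoundedStack(int capacity) {
        elements = new Object[capacity];
    }

    // Pushes an element; omitting the capacity check is a typical novice error.
    public void push(T element) {
        if (size == elements.length) {
            throw new IllegalStateException("stack is full");
        }
        elements[size++] = element;
    }

    // Pops the top element; forgetting to clear the slot leaks references.
    @SuppressWarnings("unchecked")
    public T pop() {
        if (size == 0) {
            throw new IllegalStateException("stack is empty");
        }
        T element = (T) elements[--size];
        elements[size] = null;
        return element;
    }

    public boolean isEmpty() {
        return size == 0;
    }
}
```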

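The abstract does not name a specific IRT model; as an assumed starting point, the dichotomous Rasch model expresses the probability that learner $v$ with ability $\theta_v$ solves item $i$ with difficulty $\beta_i$ as

\[
P(X_{vi} = 1 \mid \theta_v, \beta_i) = \frac{\exp(\theta_v - \beta_i)}{1 + \exp(\theta_v - \beta_i)}.
\]

Calibrating item difficulties with a model of this kind is what would make the planned test instrument automatically evaluable in a psychometrically grounded sense.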