An Analysis of Programming Course Evaluations Before and After the Introduction of an Autograder

©2021 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.

Abstract—Introductory programming courses at higher education institutions commonly enroll hundreds of students eager to learn to program. At this scale, the manual effort required to review submitted source code and provide feedback is no longer manageable. Manual review of submitted homework can also be subjective and unfair, particularly when many tutors share the grading. Autograders can help in this situation; however, little is known about how they affect students' overall perception of programming classes and teaching. This knowledge is relevant for course organizers and institutions that want to keep their programming courses attractive while coping with rising enrollment. This paper studies the answers to the standardized university evaluation questionnaires of multiple large-scale foundational computer science courses that recently introduced autograding, and analyzes the differences before and after this intervention. Drawing on additional observations, we hypothesize how the autograder might have contributed to the significant changes in the data, such as improved interaction between tutors and students, improved overall course quality, improved learning success, increased time spent, and reduced difficulty. This qualitative study aims to provide hypotheses that future research can turn into quantitative surveys and data analyses, and suggests that autograder technology can be validated as a teaching method to improve student satisfaction with programming courses.
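The abstract does not state which statistical procedure underlies the before/after comparison, so the following is a minimal, hypothetical sketch of such an analysis: ordinal Likert-scale questionnaire answers from the cohorts before and after the autograder's introduction are compared with a non-parametric rank test. The example question, the response data, and the choice of the Mann-Whitney U test are illustrative assumptions, not the paper's confirmed method.

```python
# Hypothetical sketch: compare Likert-scale evaluation answers from the
# cohort before the autograder was introduced with the cohort after.
# Data and test choice are assumptions for illustration only.
from scipy.stats import mannwhitneyu

# Example responses to "How would you rate the overall course quality?"
# on a 1 (very poor) to 5 (very good) scale.
before = [3, 2, 4, 3, 3, 2, 4, 3, 2, 3]  # pre-autograder cohort
after  = [4, 4, 5, 3, 4, 5, 4, 3, 5, 4]  # post-autograder cohort

# Likert answers are ordinal, so a rank-based test is preferable
# to a t-test, which assumes interval-scaled, normal data.
stat, p = mannwhitneyu(before, after, alternative="two-sided")
print(f"U = {stat}, p = {p:.4f}")  # p < 0.05 suggests a significant shift
```

A rank test of this kind only flags that the response distributions differ; attributing the shift to the autograder, as the abstract notes, requires the additional qualitative observations the study draws on.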
