Smart Like a Fox: How Clever Students Trick Dumb Automated Programming Assignment Assessment Systems

This case study reports on two first-semester programming courses with more than 190 students; both courses used automated assessment. By analysing the version history of suspect submissions (more than 3,300 submissions in total), we identified four astonishingly simple tricks (overfitting and evasion) and cheat patterns (redirection and injection) that students used to deceive automated programming assignment assessment systems (APAAS). Although counter-measures are not the main focus of this study, we discuss and propose them where appropriate. The primary intent of this paper, however, is to raise problem awareness and to identify and systematise the observable problem patterns in a more formal way. The immaturity of existing APAAS solutions that we identified may have implications for courses that rely heavily on automation, such as MOOCs. We therefore conclude that APAAS solutions should be examined much more from a security point of view (code injection), and that existing unit testing frameworks need to evolve into more evaluation-oriented teaching solutions that provide better trick and cheat detection as well as differentiated grading support.
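As a minimal illustration of the kind of trick the abstract names (a hypothetical sketch, not taken from the paper's data), "overfitting" typically means hardcoding the grader's visible test cases instead of solving the task generally. Randomised test data is one plausible counter-measure:

```python
import random

# Hypothetical assignment: implement add(a, b) returning the sum of two integers.

def add(a, b):
    """Honest solution."""
    return a + b

def add_overfitted(a, b):
    """'Overfitting' trick: hardcode the grader's visible test cases
    instead of implementing the general behaviour."""
    known = {(1, 2): 3, (10, 5): 15, (0, 0): 0}  # the fixed, visible test inputs
    return known.get((a, b), 0)

# Both functions pass a grader that only replays the fixed test set ...
for (a, b), expected in {(1, 2): 3, (10, 5): 15, (0, 0): 0}.items():
    assert add(a, b) == expected
    assert add_overfitted(a, b) == expected

# ... but randomised test data, one possible counter-measure, exposes the trick:
a, b = random.randint(100, 999), random.randint(100, 999)
assert add(a, b) == a + b
assert add_overfitted(a, b) != a + b  # the hardcoded table cannot cover fresh inputs
```

All names here (`add`, `add_overfitted`, the test values) are invented for illustration; the paper's actual submissions and test suites are not reproduced.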
