Preventing Cheating in Hands-on Lab Assignments

Networking, operating systems, and cybersecurity skills are best exercised in an authentic environment. Students work with real systems and tools in a lab environment and complete assigned tasks. Since all students typically receive the same assignment, they can discuss their approach and progress with an instructor, a tutoring system, or their peers. They may also search for information on the Internet. Assigning the same tasks to all students in a class is standard practice and is efficient for learning and skill development. However, it is prone to cheating when used in a summative assessment such as graded homework, a mid-term test, or a final exam: students can easily share and submit correct answers without completing the assignment. In this paper, we discuss methods for automatic problem generation for hands-on tasks completed in a computer lab environment. Using this approach, each student receives personalized tasks. We developed software for generating and submitting these personalized tasks and conducted a case study. The software was used for creating and grading a homework assignment in an introductory security course with 207 enrolled students. It revealed seven cases of suspicious submissions, which may constitute cheating. In addition, students and instructors welcomed the personalized assignments. Instructors commented that this approach scales well for large classes, and students rarely encountered issues while running their personalized lab environments. Finally, we have released the software as open source to enable other educators to use it in their courses and learning environments.
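To illustrate the general idea behind personalized tasks and the detection of suspicious submissions, the following is a minimal sketch (not the paper's actual implementation): each student's expected answer is derived deterministically from their identifier using a keyed hash, so a submission that matches a *different* student's answer can be flagged. The secret key, function names, and flag format are all assumptions made for this example.

```python
import hmac
import hashlib

# Assumption: an instructor-held secret used to derive per-student answers.
COURSE_SECRET = b"example-course-secret"

def personalized_flag(student_id: str, task: str) -> str:
    """Deterministically derive a per-student flag for a given task."""
    msg = f"{student_id}:{task}".encode()
    digest = hmac.new(COURSE_SECRET, msg, hashlib.sha256).hexdigest()
    return "FLAG{" + digest[:12] + "}"

def grade(student_id: str, task: str, submitted: str, roster: list) -> str:
    """Grade a submission; report it as suspicious if it matches a flag
    that was generated for another student on the roster."""
    if submitted == personalized_flag(student_id, task):
        return "correct"
    for other in roster:
        if other != student_id and submitted == personalized_flag(other, task):
            return f"suspicious: matches flag of {other}"
    return "incorrect"
```

Because the flags are derived rather than stored, grading scales to large classes without a database of answers, and copied submissions are detectable by construction.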
