Using automated grading tools to provide feedback to students is common in Computer Science education. The first step of automated grading is to find defects in the student program, but finding bugs in code has never been easy. Comparing computed results against a fixed set of test cases is still the most common way current automated grading tools determine correctness, and designing a set of test cases that exercises student code thoroughly takes considerable time and effort. In practice, the tests used for grading are often insufficient for accurate diagnosis. In this paper, we present our application of industrial automated testing to student assignments in an introductory programming course. We implemented a framework that collects student submissions, applies industrial automated testing tools to them, and interprets the testing results in a form that students can easily understand. We deployed the framework on five introductory C programming assignments at the University of Illinois at Urbana-Champaign. The results show that the automated feedback generation framework can discover more errors in student submissions and can provide timely, useful feedback to both instructors and students: a total of 142 missed bugs were found across 446 submissions, and more than 50% of students received feedback within 3 minutes of submission.
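To make the motivation concrete, below is a minimal, hypothetical example (not taken from the paper) of the kind of defect a fixed grading suite can miss but automated test generation can catch. The function name my_abs and the test values are ours for illustration; the comments sketch how a symbolic-execution tool such as KLEE would derive the failing input.

    #include <limits.h>
    #include <stdio.h>

    /* Student's absolute-value helper: correct for every "ordinary"
     * input, so a fixed grading suite such as {-5, 0, 7} passes it. */
    int my_abs(int x) {
        return x < 0 ? -x : x;  /* bug: -INT_MIN overflows; on typical
                                   two's-complement machines the result
                                   stays negative */
    }

    int main(void) {
        /* The fixed test cases all pass: */
        printf("%d %d %d\n", my_abs(-5), my_abs(0), my_abs(7));  /* 5 0 7 */

        /* The input an automated testing tool would generate: */
        printf("%d\n", my_abs(INT_MIN));  /* not the expected positive value */

        /* Sketch of a KLEE harness (assumes KLEE's headers are available):
         *   int x;
         *   klee_make_symbolic(&x, sizeof x, "x");
         *   assert(my_abs(x) >= 0);  // KLEE explores the x < 0 branch and
         *                            // reports x == INT_MIN as a counterexample
         */
        return 0;
    }

Because the tool enumerates program paths rather than checking a handful of hand-picked inputs, it can surface edge-case failures like this one and report the concrete triggering input back to the student.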