New Metrics for Automated Programming Assessment

ABSTRACT This paper describes a new set of automated programming assessment metrics that improves the assessment scheme proposed by Michael Rees. The metrics increase the efficiency of program assessment and reduce its subjective element to a minimum. An empirical study was conducted to evaluate the performance of the assessment method, which is based on six major software metrics. These include metrics that measure development effort, program reliability, size, data structure, logic structure, programming style, and the execution efficiency of programs. An automated assessment tool called ASSESS was developed to capture data from students' submitted programming assignments into a measurement database and to perform the analysis required to produce graphical output. To increase the descriptive power of some of the metrics, several extensions to their definitions were introduced. The assessment results show that students' programming skills can be differentiated using the new metrics, and that ASSESS facilitates the assessment of computer programs.
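The internal design of ASSESS is not detailed in this abstract. As a rough illustration of the capture step only, the following Python sketch extracts two simple measures, a size metric (non-blank lines of code) and a crude logic-structure estimate in the spirit of McCabe's cyclomatic number, from a submitted source file and records them in a measurement database. The function names, the metrics.db file, the table layout, and the token-based complexity count are all assumptions for illustration, not part of ASSESS.

import re
import sqlite3

# Hypothetical sketch only; not the ASSESS implementation.
# Decision points counted for a rough cyclomatic-number estimate.
KEYWORDS = re.compile(r"\b(if|while|for|case)\b")
OPERATORS = re.compile(r"&&|\|\|")

def collect_metrics(path):
    # Size metric: count non-blank source lines.
    with open(path) as f:
        lines = [ln for ln in f if ln.strip()]
    source = "".join(lines)
    # Logic-structure estimate: V(G) ~= decision points + 1.
    vg = len(KEYWORDS.findall(source)) + len(OPERATORS.findall(source)) + 1
    return len(lines), vg

def store_metrics(db_path, student, assignment, source_path):
    # Record one measurement row per submitted assignment.
    loc, vg = collect_metrics(source_path)
    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE IF NOT EXISTS measurements "
                "(student TEXT, assignment TEXT, loc INTEGER, vg INTEGER)")
    con.execute("INSERT INTO measurements VALUES (?, ?, ?, ?)",
                (student, assignment, loc, vg))
    con.commit()
    con.close()

store_metrics("metrics.db", "s123456", "assignment1", "submission.c")

From such a measurement table, per-assignment distributions can be queried and plotted to compare students, which is the kind of graphical output the abstract describes.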