A Knowledge Modeling Approach to Evaluating Student Essays in Engineering Courses

Automatically grading essay questions can offer advantages to instructors in higher education. Understanding and specifying how grading is done manually, so that it can potentially be automated, is a labor-intensive effort in knowledge elicitation, acquisition, and representation. This paper describes how an interdisciplinary team used conceptual graphs to formally specify a model of a good essay response, and how that expert model was then used as the standard against which student responses were judged. The methodology for creating the expert model and for representing student responses is then described, and the two were compared using two different approaches. Most students included the most important concepts, but student answers that were more complete (i.e., that also included concepts of lesser importance) received higher grades. The approaches are then evaluated in terms of reliability and validity, and finally, suggestions are made for future work.
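As a rough illustration of the completeness effect noted above, the sketch below scores a student response against an expert model reduced to a weighted set of concepts. This is a minimal assumption-laden simplification: the concept names, weights, and the coverage_score function are illustrative and do not reproduce the paper's actual conceptual-graph comparison.

```python
# Hypothetical sketch: scoring a student response against an expert model
# reduced to a weighted set of concepts. Concepts and weights are
# illustrative, not taken from the paper's expert model.

from typing import Dict, Set


def coverage_score(expert_concepts: Dict[str, float],
                   student_concepts: Set[str]) -> float:
    """Fraction of total concept importance covered by the student's answer."""
    total = sum(expert_concepts.values())
    matched = sum(w for c, w in expert_concepts.items() if c in student_concepts)
    return matched / total if total else 0.0


if __name__ == "__main__":
    # Expert model: concepts with importance weights (illustrative only).
    expert = {"heat transfer": 3.0, "conduction": 2.0, "thermal gradient": 1.0}

    # An answer covering only the most important concepts scores lower than
    # a more complete answer that also includes lesser-weighted concepts.
    partial = {"heat transfer", "conduction"}
    complete = {"heat transfer", "conduction", "thermal gradient"}

    print(f"partial answer:  {coverage_score(expert, partial):.2f}")   # 0.83
    print(f"complete answer: {coverage_score(expert, complete):.2f}")  # 1.00
```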