Inferring appropriate feedback timing from answering styles for open-ended questions
Among the possible forms of assessment, open-ended questions play an irreplaceable role in evaluating high-level thinking in science education. However, teachers often hesitate to assign this type of test because of the grading effort involved, unless the grading can be automated. Recently, there has been considerable research on automatic text processing in language learning and writing [1] and in science education [2]. The system in [2] was further developed into an intelligent tutoring system, called VIBRANT, that provides feedback to students according to the answers entered so far [3]. In that system, students answer an open-ended question in the form of ideation and explanation; when an idea or explanation is entered, appropriate comments or feedback are generated automatically according to an established user model. However, the system requires a domain-expert model to compute student scores, and only the final collection of answers matters, regardless of the order or manner in which they were entered.

In this paper, we propose a system that infers good timings for feedback from a student's answering style. Unlike previous work on automatic grading of open-ended questions [2], no explicit expert model is constructed, and the expected answers are free text rather than the semi-structured ideation-and-explanation format. For this type of open-ended question, it is challenging to know a student's knowledge state well enough to provide timely feedback. First, as in [4], we use a regression method from machine learning to train a scoring model in an off-line step; the learned grading model can then be used to grade answers automatically on-line. In addition, we aim to infer a student's state from how the answer is entered as well as from the scores obtained so far. For example, some students think a question through thoroughly before typing, while others quickly write down whatever comes to mind and revise it later. Can these behaviors be categorized and even quantitatively measured? Are they related to the scores accumulated over time? We report some preliminary observations from experiments addressing these questions.
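As a minimal sketch of the two-stage scoring idea described above (train a regression model off-line on human-graded answers, then apply it on-line to the text entered so far), the following Python snippet uses scikit-learn with TF-IDF features and ridge regression. These library and feature choices, the example answers, and the function name are assumptions for illustration, not the implementation used in the paper.

```python
# Off-line: fit a regression model mapping answer text to a human-assigned score.
# On-line: apply the learned model to whatever the student has typed so far.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

# Illustrative graded answers and scores (assumed data, not from the paper).
graded_answers = [
    "evaporation carries heat away from the water surface",
    "the water gets cold because of wind",
]
human_scores = [4.0, 2.0]

# Off-line step: learn the grading model.
grading_model = make_pipeline(TfidfVectorizer(), Ridge(alpha=1.0))
grading_model.fit(graded_answers, human_scores)

def score_current_answer(text_so_far: str) -> float:
    """Estimate a score for the (possibly partial) answer entered so far."""
    return float(grading_model.predict([text_so_far])[0])

# On-line step: score the answer as it is being written.
print(score_current_answer("heat is lost through evaporation"))
```

In such a setup, the on-line score trajectory over time, rather than a single final score, is what a feedback-timing policy could observe.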
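The question of whether answering styles can be quantitatively measured could be approached, for example, by logging timestamped edit events and deriving simple features. The sketch below is one hypothetical way to do this; the event format, feature names, and thresholds are assumptions and do not come from the paper.

```python
# Two illustrative features: a long pause before the first keystroke suggests
# "think first, then write"; a high deletion-to-insertion ratio suggests
# "write quickly, revise later".
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class EditEvent:
    timestamp: float  # seconds since the question was shown
    kind: str         # "insert" or "delete"
    chars: int        # number of characters affected

def answering_style_features(events: List[EditEvent]) -> Dict[str, float]:
    """Return simple, interpretable features of a student's answering style."""
    inserted = sum(e.chars for e in events if e.kind == "insert")
    deleted = sum(e.chars for e in events if e.kind == "delete")
    first_input_delay = events[0].timestamp if events else 0.0
    revision_ratio = deleted / inserted if inserted else 0.0
    return {
        "first_input_delay": first_input_delay,  # time spent thinking before typing
        "revision_ratio": revision_ratio,        # fraction of typed text reworked
    }

# Example edit log for one student (illustrative values).
log = [EditEvent(42.0, "insert", 120),
       EditEvent(95.0, "delete", 30),
       EditEvent(110.0, "insert", 45)]
print(answering_style_features(log))
```

Features of this kind, combined with the scores accumulated over time, are the sort of signals from which feedback timing might be inferred.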
[1] Jill Burstein et al. Automated Essay Scoring: A Cross-disciplinary Perspective, 2003.
[2] Chun-Yen Chang et al. A User Modeling Framework for Exploring Creative Problem-Solving Ability, AIED, 2005.
[3] Carolyn Penstein Rosé et al. VIBRANT: A Brainstorming Agent for Computer Supported Creative Problem Solving, Intelligent Tutoring Systems, 2006.
[4] Chun-Yen Chang et al. Assessing Creative Problem-solving with Automated Text Grading, 2008.