The Knowledge Component (KC) picture of learning has proven useful for constructing models of student learning in a number of subject areas. However, it is still unclear how well this picture generalizes to other contexts and subject areas. A corpus of 62,000 exercises from 10 textbooks on the Mastering platform has been tagged with KCs by content experts. In this report, I introduce a strategy for investigating how important a given set of KCs is in describing student performance as students solve problems. The strategy is to see how much of a student's performance on an exercise is explained by the associated KC and how much is predicted by a problem-specific difficulty parameter. To do this, I introduce a model that combines the Rasch model with the learning curves from the KC picture. For this corpus and set of KC tags, a rather striking picture emerges: problem difficulty accounts for most of the student behavior, while KC learning accounts for only a small portion. I hypothesize that these KC tags do not accurately capture the skills students are using while doing their homework.
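The combined model described above can be sketched as a logistic model whose logit adds a Rasch term (student ability minus problem difficulty) to an Additive Factors-style learning term (a per-KC learning rate times prior practice opportunities). This is a minimal illustrative sketch, not the paper's actual implementation; the parameter names (`theta`, `beta`, `kc_gammas`, `kc_opportunities`) are my own assumptions.

```python
import math

def p_correct(theta, beta, kc_gammas, kc_opportunities):
    """Predicted success probability under a Rasch-plus-learning-curve model.

    theta            -- student ability (Rasch)
    beta             -- problem-specific difficulty (Rasch)
    kc_gammas        -- dict: KC name -> learning rate (assumed positive)
    kc_opportunities -- dict: KC name -> number of prior practice opportunities
    """
    # Rasch part: ability minus item difficulty.
    logit = theta - beta
    # KC learning-curve part: each tagged KC contributes growth with practice.
    for kc, gamma in kc_gammas.items():
        logit += gamma * kc_opportunities.get(kc, 0)
    # Logistic link.
    return 1.0 / (1.0 + math.exp(-logit))
```

Under this form, asking whether performance is driven by problem difficulty or by KC learning amounts to asking whether the fitted `beta` terms or the fitted `gamma` terms carry most of the explanatory weight.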