Integrating Knowledge Tracing and Item Response Theory: A Tale of Two Frameworks

Traditionally, the assessment and learning science communities have relied on different paradigms to model student performance. The assessment community uses Item Response Theory (IRT), which models differences in student ability and problem difficulty, while the learning science community uses Knowledge Tracing, which captures skill acquisition over time. The two paradigms are complementary: IRT cannot model student learning, while Knowledge Tracing assumes all students and problems are identical. Recently, two closely related models based on a principled synthesis of IRT and Knowledge Tracing were introduced. However, these models were evaluated on different data sets, with different evaluation metrics, and with different ways of splitting the data into training and testing sets. In this paper we reconcile their results by presenting a unified view of the two models and by evaluating them under a common evaluation metric. We find that the two models are equivalent and differ only in their training procedure. Our results show that the combined IRT and Knowledge Tracing models offer the best of assessment and learning science: high prediction accuracy, like IRT, and the ability to model student learning, like Knowledge Tracing.
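To make the contrast concrete, the sketch below shows textbook forms of the two model families and one illustrative way to combine them: a Rasch (1PL) item response function and a single Knowledge Tracing update step, with the guess and slip parameters replaced by logistic functions of student ability and item difficulty. The function names and the particular logistic parameterization are assumptions made for exposition, not the specific synthesized models compared in the paper.

```python
import math


def sigmoid(x):
    """Logistic function."""
    return 1.0 / (1.0 + math.exp(-x))


def irt_p_correct(theta, beta):
    # Rasch (1PL) item response function: probability of a correct
    # response given student ability theta and item difficulty beta.
    return sigmoid(theta - beta)


def kt_step(p_mastery, correct, guess, slip, learn):
    # One standard Knowledge Tracing step: predict the response, update
    # the mastery estimate from the observed correctness, then apply
    # the learning transition.
    p_correct = p_mastery * (1.0 - slip) + (1.0 - p_mastery) * guess
    if correct:
        posterior = p_mastery * (1.0 - slip) / p_correct
    else:
        posterior = p_mastery * slip / (1.0 - p_correct)
    p_next = posterior + (1.0 - posterior) * learn
    return p_correct, p_next


def combined_step(p_mastery, correct, theta, beta, learn):
    # Illustrative IRT+KT hybrid (an assumption for exposition, not the
    # paper's exact model): guess and slip are no longer global constants
    # but logistic functions of student ability and item difficulty, so
    # predictions vary across students and items while mastery still
    # evolves over time as in Knowledge Tracing.
    guess = irt_p_correct(theta, beta)  # unmastered students do better on easy items
    slip = irt_p_correct(beta, theta)   # mastered students slip more on hard items
    return kt_step(p_mastery, correct, guess, slip, learn)


if __name__ == "__main__":
    # Toy sequence: one student (theta = 0.2) practicing a skill on items
    # of varying difficulty; all values are made up for illustration.
    p_mastery = 0.3
    for beta, correct in [(-1.0, True), (0.0, True), (1.0, False), (1.0, True)]:
        p_correct, p_mastery = combined_step(p_mastery, correct,
                                             theta=0.2, beta=beta, learn=0.15)
        print(f"beta={beta:+.1f}  P(correct)={p_correct:.3f}  P(mastery)={p_mastery:.3f}")
```

Threading p_mastery through combined_step over a student's response sequence yields per-opportunity predictions that depend on both the student's ability and each item's difficulty, which is the kind of behavior the combined IRT and Knowledge Tracing models are designed to capture.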
