KT-IDEM: Introducing Item Difficulty to the Knowledge Tracing Model

Many models in computer-based education and assessment take item difficulty into account. Despite the positive results of such models, knowledge tracing (KT) is still used in its basic form because its skill-level diagnostic abilities are very useful to teachers. This leads to the research question we address in this work: can KT be effectively extended to capture item difficulty and thereby improve prediction accuracy? A variety of extensions to KT have been proposed in recent years. One such extension is Baker's contextual guess and slip model. While that model has shown gains over KT in internal validation testing, it has not performed well relative to KT on unseen in-tutor data or on post-test data; it has, however, proven valuable when used alongside other models. The contextual guess and slip model increases the complexity of KT by adding regression steps and feature generation, and the burden of engineering features across datasets may have hindered its performance. One aim of our work is therefore to make the most minimal of modifications to KT in order to add item difficulty, keeping the modification limited to a change in the topology of the model. We analyze datasets from two intelligent tutoring systems with KT and with a model we call KT-IDEM (Item Difficulty Effect Model), and show that substantial performance gains can be achieved with this minor modification that incorporates item difficulty.
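To make the topology change concrete, the sketch below contrasts classic Bayesian Knowledge Tracing, where one guess/slip pair is shared by every question of a skill, with the KT-IDEM idea of indexing guess and slip by item. This is a minimal illustrative sketch, not the authors' implementation; the parameter values, item names, and the `learn` rate are assumptions chosen for illustration.

```python
def bkt_update(p_know, correct, guess, slip, learn):
    """One BKT step: Bayesian posterior on mastery given the observed
    response, followed by the learning transition."""
    if correct:
        post = p_know * (1 - slip) / (p_know * (1 - slip) + (1 - p_know) * guess)
    else:
        post = p_know * slip / (p_know * slip + (1 - p_know) * (1 - guess))
    return post + (1 - post) * learn


def predict_correct(p_know, guess, slip):
    """Probability of a correct response under the current mastery estimate."""
    return p_know * (1 - slip) + (1 - p_know) * guess


# Classic KT: a single (guess, slip) pair for the whole skill.
skill_guess, skill_slip = 0.20, 0.10

# KT-IDEM: a (guess, slip) pair per item, so an "easy" item has a high
# guess / low slip and a "hard" item the reverse (illustrative values).
item_params = {"easy_item": (0.30, 0.05), "hard_item": (0.10, 0.20)}

p = 0.5  # prior probability the student knows the skill
for item, correct in [("easy_item", True), ("hard_item", False)]:
    guess, slip = item_params[item]  # classic KT would use skill_guess/skill_slip here
    p = bkt_update(p, correct, guess, slip, learn=0.1)
```

The only structural difference between the two models is where the guess/slip parameters are looked up; the update equations are unchanged, which is why the modification can be expressed purely as a change in the network's topology.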
