Ensembling predictions of student knowledge within intelligent tutoring systems

Over the last decades, there has been a rich variety of approaches to modeling student knowledge and skill within interactive learning environments. Several recent empirical comparisons have examined which types of student models best predict future performance, both within and outside the interactive learning environment; however, these comparisons have produced contradictory results. In this paper, we examine whether ensemble methods, which integrate multiple models, can produce predictions comparable to or better than the best of nine student modeling frameworks taken individually. We ensemble model predictions within a Cognitive Tutor for Genetics, at the level of predicting knowledge action by action within the tutor, and we evaluate the predictions in terms of future performance within the tutor and on a paper post-test. Within this data set, we do not find evidence that ensembles of models are significantly better: ensembles perform comparably to or slightly better than the best individual models at predicting future performance within the tutor software, but marginally significantly worse than the best individual models at predicting post-test performance.
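To make the setup concrete, the sketch below shows one simple way to blend per-action correctness predictions from several student models and compare the blend against the best single model. This is an illustrative assumption, not the paper's exact procedure: the synthetic data, the unweighted mean, and the logistic-regression blender stand in for whatever base models and ensemble methods were actually used.

# A minimal sketch (not the paper's exact method) of ensembling per-action
# correctness predictions from several student models. The synthetic data and
# the choice of blenders are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic stand-in: each column holds one student model's predicted
# probability that the next action is correct (e.g., BKT, PFA variants).
n_actions, n_models = 5000, 9
true_correct = rng.integers(0, 2, size=n_actions)
base_preds = np.clip(
    true_correct[:, None] * 0.6 + rng.normal(0.2, 0.25, size=(n_actions, n_models)),
    0.01, 0.99,
)

# Hold out part of the data to fit the ensemble, as one would with
# cross-validation over students in practice.
split = n_actions // 2
train_X, test_X = base_preds[:split], base_preds[split:]
train_y, test_y = true_correct[:split], true_correct[split:]

# Ensemble 1: unweighted mean of the individual models' predictions.
mean_pred = test_X.mean(axis=1)

# Ensemble 2: logistic-regression blend that learns weights over the models.
stacker = LogisticRegression(max_iter=1000)
stacker.fit(train_X, train_y)
stacked_pred = stacker.predict_proba(test_X)[:, 1]

print("best single model AUC:",
      max(roc_auc_score(test_y, test_X[:, j]) for j in range(n_models)))
print("mean ensemble AUC:    ", roc_auc_score(test_y, mean_pred))
print("blended ensemble AUC: ", roc_auc_score(test_y, stacked_pred))

In practice the comparison would be run with student-level cross-validation and evaluated both on within-tutor performance and on the post-test, which is where the paper's ensembles fell short of the best individual models.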
