Towards reliable and valid measurement of individualized student parameters

Research in Educational Data Mining could benefit from greater efforts to ensure that models yield reliable, valid, and interpretable parameter estimates. Such efforts have been especially lacking for models with individualized student parameters. We collected two datasets from a sizable student population with excellent “depth”: many observations per skill per student. We fit two models that individualize for both student ability and student learning rate, the Individualized-slope Additive Factors Model (iAFM) and Individualized Bayesian Knowledge Tracing (iBKT). Estimates of student ability were reliable and valid: they were consistent across both models and both datasets, and they significantly predicted out-of-tutor pretest data. In one of the two datasets, estimates of student learning rate were likewise reliable and valid: they were consistent across models and significantly predicted pretest-to-posttest gains. This is the first demonstration that statistical models fit to data from students’ use of learning technology can produce reliable and valid estimates of individual student learning rates. Further, we sought to interpret and understand what differentiates a student with a high estimated learning rate from one with a low estimated learning rate. We found that learning rate is significantly related to estimates of student ability (prior knowledge) and to self-reported measures of diligence. Finally, we suggest a variety of possible applications for models with reliable estimates of individualized student parameters, including a novel, more straightforward way of identifying wheel spinning.
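For concreteness, the following is a minimal sketch of what such individualization looks like in the iAFM case, assuming the standard AFM logistic parameterization extended with a per-student learning-rate slope; the notation here is ours, not necessarily the paper's. For student $i$ on a step $j$ that requires skills indexed by $k$:

$$\log \frac{p_{ij}}{1 - p_{ij}} = \theta_i + \sum_{k} q_{jk}\left(\beta_k + (\gamma_k + \delta_i)\, T_{ik}\right)$$

where $p_{ij}$ is the probability of a correct first attempt, $q_{jk}$ is the skill-to-step mapping (Q-matrix), $T_{ik}$ counts prior practice opportunities on skill $k$, $\theta_i$ is the individualized student ability, $\beta_k$ and $\gamma_k$ are per-skill difficulty and learning-rate parameters, and $\delta_i$ is the individualized student learning-rate slope. Setting $\delta_i = 0$ for all students recovers standard AFM.

The reliability and validity checks described above amount to correlating per-student parameter estimates across models and against external measures. The sketch below (not the authors' code; all variable names and the synthetic data are hypothetical) illustrates the shape of that analysis:

```python
# Sketch of the reliability/validity checks: correlate per-student estimates
# across the two models, and against an out-of-tutor pretest measure.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_students = 200

# Stand-ins for per-student ability estimates from the two fitted models.
iafm_ability = rng.normal(size=n_students)
ibkt_ability = iafm_ability + rng.normal(scale=0.5, size=n_students)
pretest = 0.8 * iafm_ability + rng.normal(scale=0.7, size=n_students)

# Reliability: agreement on the same construct across the two models.
r_models, p_models = pearsonr(iafm_ability, ibkt_ability)

# Validity: agreement with an external, out-of-tutor measure.
r_pretest, p_pretest = pearsonr(iafm_ability, pretest)

print(f"cross-model reliability: r = {r_models:.2f} (p = {p_models:.3g})")
print(f"pretest validity:        r = {r_pretest:.2f} (p = {p_pretest:.3g})")
```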
