Toward Near Zero-Parameter Prediction Using a Computational Model of Student Learning

Computational models of learning can be powerful tools for testing educational technologies, automating the authoring of instructional software, and advancing theories of learning. These mechanistic models, which instantiate computational theories of the learning process, can predict learners' performance in instructional technologies given only the technology itself, without fitting any parameters to existing learner data. While these so-called "zero-parameter" models have been successful in modeling student learning in intelligent tutoring systems, they still show systematic deviations from human learning performance. One deviation stems from the models' lack of prior knowledge: every model starts as a blank slate, which leads to substantial differences in performance at the first practice opportunity. In this paper, we explore three strategies for accounting for prior knowledge within computational models of learning and examine the effect of these strategies on the models' predictive accuracy.
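To make the core idea concrete, the sketch below shows, in plain Python, how a zero-parameter prediction can be produced by simply running a simulated learner through a practice sequence rather than fitting parameters to student data, and why a blank-slate start diverges from a learner seeded with prior knowledge at the first opportunity. This is a minimal illustrative toy, not the Apprentice Learner or SimStudent code, and it does not reproduce the paper's three prior-knowledge strategies; all names (ToySimulatedStudent, predict_learning_curve, the "fraction-add" skill label) are hypothetical.

```python
# Toy sketch: a simulated student whose predicted learning curve comes from
# running the model, not from fitting parameters to existing learner data.
import random


class ToySimulatedStudent:
    def __init__(self, prior_skills=None, learn_prob=0.3, seed=0):
        # Skills the student already "knows"; an empty set models a blank slate.
        self.skills = set(prior_skills or [])
        self.learn_prob = learn_prob  # chance of acquiring a skill after feedback
        self.rng = random.Random(seed)

    def attempt(self, skill):
        """Return True if the first attempt on a step requiring `skill` is correct."""
        correct = skill in self.skills
        # After corrective feedback, the student may acquire the missing skill.
        if not correct and self.rng.random() < self.learn_prob:
            self.skills.add(skill)
        return correct


def predict_learning_curve(student, problem_sequence):
    """First-attempt correctness (1.0 or 0.0) at each practice opportunity."""
    return [1.0 if student.attempt(skill) else 0.0 for skill in problem_sequence]


if __name__ == "__main__":
    # Ten practice opportunities on one hypothetical skill.
    problems = ["fraction-add"] * 10

    blank = ToySimulatedStudent()                                # blank slate
    seeded = ToySimulatedStudent(prior_skills={"fraction-add"})  # seeded prior knowledge

    print("blank slate :", predict_learning_curve(blank, problems))
    print("seeded prior:", predict_learning_curve(seeded, problems))
```

Under these assumptions, the blank-slate learner is guaranteed to err on the first opportunity while the seeded learner is not, which mirrors the deviation at the first practice opportunity described above.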
