Content Learning Analysis Using the Moment-by-Moment Learning Detector

In recent years, it has become clear that educational data mining methods can play a positive role in refining the content of intelligent tutoring systems. In particular, determining which content is more and less effective at promoting learning can improve tutoring systems by identifying ineffective content so that it can be cycled out of the system. Analysis of the learning value of content can also help teachers and system designers create better content by drawing on what has and has not worked in the past. Past work on this type of analysis has relied solely on student response data; we extend it by instead using the moment-by-moment learning model, P(J). This model uses parameters learned by Bayesian Knowledge Tracing, along with other features extracted from log data, to compute the probability that a student learned a skill at a specific problem step. By averaging P(J) values for each item across students and comparing items using statistical tests with post-hoc controls, we can investigate which items typically produce more and less learning. We apply this analysis to items within twenty problem sets completed by students using the ASSISTments Platform, and show how item-level learning results can be obtained and interpreted.
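For context, the quantity the detector estimates can be sketched as follows. This is a reconstruction from the moment-by-moment learning literature, not a formula quoted from this paper, and the exact conditioning (here, on the next two observed actions) is an assumption about the formulation:

```latex
% Sketch of the moment-by-moment learning label P(J) at step n.
% L_n            : the student knows the skill at step n
% T              : the skill is learned (transitions) at step n
% A_{n+1},A_{n+2}: correctness of the next two observed actions
P(J_n) = P(\neg L_n \wedge T \mid A_{n+1}, A_{n+2})
       = \frac{P(A_{n+1}, A_{n+2} \mid \neg L_n \wedge T)\,
               P(\neg L_n \wedge T)}
              {P(A_{n+1}, A_{n+2})}
```

All component probabilities can be derived from fitted Bayesian Knowledge Tracing parameters; a supervised model over log-data features is then trained to predict this label at each problem step.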

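As a concrete illustration of the item-comparison step, here is a minimal sketch in Python, assuming a log table with one P(J) estimate per student action. The column names ('item_id', 'student_id', 'p_j'), the one-way ANOVA omnibus test, and the Tukey HSD post-hoc correction are all illustrative assumptions; the paper's exact statistical procedure may differ. Averaging within each (student, item) pair first keeps students with many attempts from dominating an item's mean.

```python
# Sketch: average P(J) per item across students, test for overall
# differences between items, then apply a post-hoc comparison.
import pandas as pd
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

def compare_item_learning(log: pd.DataFrame, alpha: float = 0.05):
    """Compare mean P(J) across items within one problem set."""
    # One observation per (student, item): that student's mean P(J)
    # on the item, so each student contributes equally to each item.
    per_student_item = (log.groupby(['item_id', 'student_id'])['p_j']
                           .mean().reset_index())

    # Omnibus test: do the items differ in the learning they produce?
    samples = [grp['p_j'].values
               for _, grp in per_student_item.groupby('item_id')]
    f_stat, p_value = stats.f_oneway(*samples)

    # Post-hoc control for multiple pairwise comparisons (Tukey HSD
    # here; other corrections would slot in the same way).
    posthoc = pairwise_tukeyhsd(per_student_item['p_j'],
                                per_student_item['item_id'],
                                alpha=alpha)
    return f_stat, p_value, posthoc

# Usage (hypothetical data): df has columns item_id, student_id, p_j.
# f, p, tukey = compare_item_learning(df); print(tukey.summary())
```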