Enhancing Online Problems Through Instructor-Centered Tools for Randomized Experiments

Digital educational resources could enable randomized experiments that answer pedagogical questions instructors care about, taking academic research out of the laboratory and into the classroom. We take an instructor-centered approach to designing experimentation tools that lower the barriers for instructors to conduct experiments. We explore this approach through DynamicProblem, a proof-of-concept system for experimentation on components of digital problems, which provides interfaces for authoring experiments on explanations, hints, feedback messages, and learning tips. To rapidly turn experimental data into practical improvements, the system uses an interpretable machine learning algorithm to analyze students' ratings of which conditions are helpful, and it presents conditions to future students in proportion to the evidence that they are rated more highly. We evaluated the system by collaboratively deploying experiments in the courses of three mathematics instructors. They reported benefits in reflecting on their pedagogy and in gaining a new method for improving online problems for future students.
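
The presentation policy described above, showing each condition in proportion to the evidence that it is rated more helpful, matches probability matching as performed by Thompson sampling on a Bernoulli bandit. The sketch below is a minimal illustration of that idea under those assumptions, not the paper's implementation; the condition names, the `record_rating` helper, and the simulated rating rates are all hypothetical.

```python
import random

# Beta(1, 1) priors over the probability that each condition
# (e.g., an alternative explanation) is rated "helpful".
# Condition names and counts are hypothetical.
conditions = {
    "explanation_A": {"helpful": 1, "not_helpful": 1},
    "explanation_B": {"helpful": 1, "not_helpful": 1},
}

def choose_condition():
    """Thompson sampling: draw a plausible helpfulness rate for each
    condition from its Beta posterior and show the highest draw.
    Over many students, each condition is shown in proportion to the
    evidence that it is the most helpful."""
    draws = {
        name: random.betavariate(c["helpful"], c["not_helpful"])
        for name, c in conditions.items()
    }
    return max(draws, key=draws.get)

def record_rating(name, helpful):
    """Update a condition's posterior with one student's binary rating."""
    key = "helpful" if helpful else "not_helpful"
    conditions[name][key] += 1

# Simulated loop: condition A is rated helpful 70% of the time,
# condition B 40% (hypothetical ground truth for the demo).
true_rates = {"explanation_A": 0.7, "explanation_B": 0.4}
for _ in range(1000):
    shown = choose_condition()
    record_rating(shown, random.random() < true_rates[shown])

print(conditions)  # A should accumulate far more presentations.
```

A policy of this kind retains some randomization, so the experiment keeps collecting evidence about every condition while steering most future students toward the better-rated one.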
