Giving hints is complicated: understanding the challenges of an automated hint system based on frequent wrong answers

Formative feedback is important for learning, and code tracing is a vital skill in computer science education. We set out to deliver formative feedback to students on constructed-response, code-tracing assessments by building a student error model from insights gained by inspecting the assessments' frequent wrong answers. We also compared two kinds of hints: reteaching and knowledge integration. We found that wrong-answer co-occurrence provides useful information for our model. However, our intervention experiment found no evidence that the hints improved student outcomes on post-test questions. We therefore also report a retrospective, exploratory analysis of potential reasons for this null result.
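To illustrate what a co-occurrence analysis of frequent wrong answers might look like, here is a minimal sketch in Python. It counts, per student, which pairs of wrong answers appear together and scores each pair with lift, one common interestingness measure from pattern mining; the data structure and the answer labels are hypothetical, not the paper's actual dataset or method.

```python
from collections import Counter
from itertools import combinations

# Hypothetical input: for each student, the set of wrong-answer labels
# they submitted across the code-tracing questions.
student_wrong_answers = {
    "s1": {"Q1:off_by_one", "Q2:wrong_scope"},
    "s2": {"Q1:off_by_one", "Q2:wrong_scope", "Q3:alias_confusion"},
    "s3": {"Q2:wrong_scope"},
}

n_students = len(student_wrong_answers)
single = Counter()  # students giving each wrong answer
pair = Counter()    # students giving both answers in a pair

for answers in student_wrong_answers.values():
    single.update(answers)
    pair.update(combinations(sorted(answers), 2))

# Lift > 1 means two wrong answers co-occur more often than expected by
# chance, which may hint at a shared underlying misconception.
for (a, b), n_ab in pair.items():
    p_a, p_b = single[a] / n_students, single[b] / n_students
    lift = (n_ab / n_students) / (p_a * p_b)
    print(f"{a} & {b}: lift = {lift:.2f}")
```

A student error model could then group high-lift pairs into candidate misconceptions and attach a hint to each group, though many other measures and groupings are possible.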
