Pensieve: Feedback on Coding Process for Novices

In large undergraduate computer science classrooms, student learning on assignments is often gauged only by the final solution, not by the programming process that produced it. As a consequence, teachers cannot give detailed feedback on how students apply programming methodology, and novice students often lack a metacognitive understanding of how they learn. We introduce Pensieve, a drag-and-drop, open-source tool that organizes snapshots of student code as students progress through an assignment. The tool is designed to encourage sit-down conversations between student and teacher about the programming process. Visualizing how code evolves over time facilitates discussion of intermediate work and of progress toward learning goals, both of which would otherwise remain invisible in a single final submission. This paper discusses the pedagogical foundations and technical details of Pensieve and reports results from a 207-student classroom deployment, suggesting that the tool has meaningful educational impact for both students and teachers.
