Districtwide Implementations Outperform Isolated Use of Automated Feedback in High School Writing

This paper describes a large-scale evaluation of automated writing evaluation in classroom, non-high-stakes settings. Thirty-three high schools in California made use of an educational product, Turnitin Revision Assistant, during the 2016-2017 school year. We demonstrate moderate evidence of growth in student outcomes associated with this usage overall, exceeding rates of improvement statewide. We empirically demonstrate that broader adoption across buildings within a school district is correlated with stronger outcomes, and discuss implementation steps that can support such broad adoptions. Finally, we replicate this finding in a new context with a case study of a full district adoption in Georgia, comprising ten schools.
