Analysis of a Large-Scale Formative Writing Assessment System with Automated Feedback

Formative writing systems with automated scoring provide opportunities for students to write, receive feedback, and then revise essays in a timely, iterative cycle. This paper describes ongoing investigations of a formative writing tool, conducted by mining student data to understand how the system performs and to measure improvement in student writing. The sampled data included over 1.3 million student essays written in response to approximately 200 pre-defined prompts, as well as a log of all student actions and computer-generated feedback. The analyses measured and modeled changes in student performance across revisions, the effects of system responses, and the amount of time students spent working on assignments. Implications are discussed for employing large-scale data analytics to improve educational outcomes, to understand the role of feedback in writing, to drive improvements in formative technology, and to aid in designing more effective feedback and scaffolding to support students in the writing process.
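To make the revision-level modeling mentioned above concrete, the sketch below fits a simple trend of automated score against revision number. This is an illustrative sketch only: the dataframe, the column names (student_id, revision_number, score), and the ordinary least-squares specification are assumptions for illustration, not the system's actual analysis pipeline.

```python
# Illustrative sketch: estimate the average change in automated score per
# successive revision. Column names and data are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical log of scored revisions: one row per (student, essay revision).
revisions = pd.DataFrame({
    "student_id":      [1, 1, 1, 2, 2, 3, 3, 3, 3],
    "revision_number": [0, 1, 2, 0, 1, 0, 1, 2, 3],
    "score":           [2.0, 2.5, 3.0, 3.5, 3.5, 1.5, 2.0, 2.5, 3.0],
})

# Simple OLS trend of score on revision number; a hierarchical (mixed-effects)
# model grouping by student would be a natural extension for real data.
model = smf.ols("score ~ revision_number", data=revisions).fit()
print(model.params)  # slope estimates average score gain per revision
```

In practice, a study of this scale would also control for prompt, assignment, and time on task, but the basic question (how scores change as students revise in response to feedback) can be framed as a regression of this form.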