Formative writing systems with automated scoring give students opportunities to write, receive feedback, and then revise essays in a timely, iterative cycle. This paper describes ongoing investigations of a formative writing tool through mining student data in order to understand how the system performs and to measure improvement in student writing. The sampled data included over 1.3M student essays written in response to approximately 200 pre-defined prompts, as well as a log of all student actions and computer-generated feedback. Analyses both measured and modeled changes in student performance over revisions, the effects of system responses, and the amount of time students spent working on assignments. Implications are discussed for employing large-scale data analytics to improve educational outcomes, to understand the role of feedback in writing, to drive improvements in formative technology, and to aid in designing better kinds of feedback and scaffolding to support students in the writing process.