Beyond the Design of Automated Writing Evaluation: Pedagogical Practices and Perceived Learning Effectiveness in EFL Writing Classes.

Automated writing evaluation (AWE) software is designed to provide instant computer-generated scores for a submitted essay along with diagnostic feedback. Most studies on AWE have focused on psychometric evaluations of its validity; however, studies on how effectively AWE is used in writing classes as a pedagogical tool are limited. This study employs a naturalistic classroom-based approach to explore the interaction between how an AWE program, MY Access!, was implemented in three different ways in three EFL college writing classes in Taiwan and how students perceived its effectiveness in improving their writing. The findings show that, although the implementation of AWE was not in general perceived very positively by the three classes, it was perceived comparatively more favorably when the program was used to facilitate students' early drafting and revising, followed by human feedback from both the teacher and peers later in the process. The study also reveals that the autonomous use of AWE as a surrogate writing coach, with minimal human facilitation, frustrated students and limited their learning of writing. In addition, teachers' attitudes toward AWE use and their technology skills, as well as students' learner characteristics and goals for learning to write, may play vital roles in determining the effectiveness of AWE. Given the limitations inherent in the design of AWE technology, language teachers need to be critically aware that implementing AWE requires well thought-out pedagogical designs and thorough consideration of its relevance to the objectives of learning to write.
