Learner Fit in Scaling Up Automated Writing Evaluation

Valid evaluations of automated writing evaluation (AWE) design, development, and implementation should integrate the learners' perspective to ensure that desired outcomes are attained. This paper explores the learner fit quality of the Research Writing Tutor (RWT), an emerging AWE tool tested with L2 writers at an early stage of its development. Employing a mixed-methods approach, the authors sought to answer questions about the nature of learners' interactional modifications with RWT and their perceptions of the appropriateness of its feedback on the communicative effectiveness of research article Introduction discourse. The findings reveal that RWT's move-, step-, and sentence-level feedback provides various opportunities for learners to engage with the revision task at a useful level of difficulty and stimulates interaction appropriate to their individual characteristics. The authors also discuss insights about usefulness, user-friendliness, and trust as concepts inherent to appropriateness.
