Automated Scoring for Elicited Imitation Tests

This paper reports on the first stage of a research project which ultimately aims to automatically score performance on an Elicited Imitation (EI) test. In a standard EI test, the second language (L2) learner hears and then imitates sentences of varying difficulty. As the length or complexity of items increases, learners find it more difficult to imitate, and the kinds of errors they make reveal the characteristics of their grammatical ability. EI tests have attracted attention in recent years as a way of assessing L2 learners' productive grammatical ability, but the time it takes to score such tests manually severely limits the applications to which they can be put. If automatic scoring becomes possible, immediate feedback becomes a reality and the usefulness of the tests greatly increases. The immediate aim of the first stage of this research project is to build a database of audio files of Japanese university students performing English elicited imitation test items. This database will then be used in the next stage of the research to see whether an open source automatic speech recognition (ASR) tool can be used to reliably score students' test performance.

Elicited Imitation Tests
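As a minimal sketch of what automatic EI scoring could look like once an ASR transcript is available, the fragment below compares a target sentence against a hypothetical transcript using word-level edit distance. The function names and the normalized scoring formula are illustrative assumptions, not the scoring method the project itself proposes.

```python
# Hypothetical sketch: scoring one EI item by comparing the target sentence
# against an ASR transcript. The names and formula are assumptions for
# illustration, not the paper's actual scoring procedure.

def word_edit_distance(ref, hyp):
    """Levenshtein distance computed over word tokens."""
    m, n = len(ref), len(hyp)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i
    for j in range(n + 1):
        dp[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution
    return dp[m][n]

def ei_item_score(target, transcript):
    """Return a score in [0, 1]; 1.0 indicates a verbatim imitation."""
    ref = target.lower().split()
    hyp = transcript.lower().split()
    if not ref:
        return 0.0
    return max(0.0, 1.0 - word_edit_distance(ref, hyp) / len(ref))

print(ei_item_score("the dog chased the cat", "the dog chased the cat"))  # 1.0
print(ei_item_score("the dog chased the cat", "dog chase the cat"))       # 0.6
```

A production system would of course need to handle ASR errors, disfluencies, and partial credit for grammatically meaningful substitutions, but a distance-based score of this kind is a common starting point for transcript comparison.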