This paper reports on the first stage of a research project that ultimately aims to automatically score performance on an Elicited Imitation (EI) test. In a standard EI test, the second language (L2) learner hears and then imitates sentences of varying difficulty. As the length or complexity of items increases, learners find it more difficult to imitate them, and the kinds of errors they make reveal the characteristics of their grammatical ability. EI tests have attracted attention in recent years as a way of assessing L2 learners' productive grammatical ability, but the time it takes to score such tests manually severely limits the applications to which they can be put. If automatic scoring becomes possible, immediate feedback becomes a reality and the usefulness of the tests greatly increases. The immediate aim of this first stage is to build a database of audio files of Japanese university students performing English elicited imitation test items. This database will then be used in the next stage of the research to determine whether an open-source automatic speech recognition (ASR) tool can reliably score students' test performance.
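As a rough illustration of what ASR-based scoring of an EI item could look like (this is a hypothetical sketch, not the project's actual scoring method): one simple approach is to compare the ASR transcript against the target sentence word by word and report the proportion of target words reproduced in order.

```python
from difflib import SequenceMatcher

def ei_score(target: str, transcript: str) -> float:
    """Proportion of target words matched, in order, in the ASR transcript.

    Illustrative only: real EI scoring schemes may weight errors by
    grammatical category rather than treating all words equally.
    """
    t = target.lower().split()
    h = transcript.lower().split()
    # get_matching_blocks() finds the longest in-order word overlaps
    matched = sum(b.size for b in SequenceMatcher(None, t, h).get_matching_blocks())
    return matched / len(t) if t else 0.0

# 5 of the 6 target words are reproduced ("a" replaces "the")
print(ei_score("the cat sat on the mat", "the cat sat on a mat"))
```

A learner who substitutes one word in a six-word item would score 5/6 under this scheme; richer schemes could score only obligatory grammatical morphemes.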