Automating the scoring of elicited imitation tests

This paper explores the role of machine learning in automating the scoring for one kind of spoken language test: elicited imitation (EI). After sketching the background and rationale for EI testing, we give a brief overview of EI test results that we have collected. To date, the administration and scoring of these tests have been done sequentially, and scoring latency has not been critically important; our goal now is to automate the test. We show how this implies the need for an adaptive capability at run time, and motivate the need for machine learning in the creation of this kind of test. We discuss our sizable store of data from prior EI test administrations. Then we show various experiments that illustrate how this prior information is useful in predicting student performance. We present simulations designed to foreshadow how well the system will be able to adapt on the fly to student responses. Finally, we draw conclusions and mention possible future work.