An EMA-based articulatory feedback approach to facilitate L2 speech production learning

When acquiring a second language (L2), learners often have difficulty achieving native-like production even when they receive instruction on how to position the speech articulators for correct production. A principal reason is that learners lack information on how to modify their articulation to produce correct L2 sounds. A visual feedback method using electromagnetic articulography (EMA) has previously been implemented for this application with some success [Levitt et al., 2010]. However, because that approach provided feedback on tongue tip position only, it was unsuitable for vowels and many consonants. In this work, we have developed a more general EMA-based articulatory feedback system that provides real-time visual feedback of multiple head-movement-corrected sensor positions, together with target articulatory positions specific to each learner. We have used this system to improve production of the unfamiliar vowel /ae/ by Japanese learners of American English. For each learner, we predicted an appropriate speaker-specific /ae/ position from that learner's positions for /iy/, /aa/, and /uw/, vowels that occur in both languages, using a model trained on previously collected kinematic data from 49 native speakers of American English. Results comparing formants before and after feedback training will be presented to demonstrate the efficacy of the approach.
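A rough sketch of how such a speaker-specific target prediction might work is given below. The abstract does not specify the model form, so this assumes a simple per-coordinate least-squares regression from the three shared corner-vowel positions to the /ae/ position; the function names, the 3-D (x, y, z) sensor representation, and the single-sensor simplification are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def fit_ae_model(corner_vowels, ae_positions):
    """Fit a linear map from corner-vowel positions to /ae/ position.

    corner_vowels: (49, 9) array -- x/y/z of /iy/, /aa/, /uw/ for each
        of the 49 native American English speakers (one sensor assumed).
    ae_positions:  (49, 3) array -- measured /ae/ position per speaker.
    Returns least-squares weights mapping corner vowels (plus a bias
    term) to the /ae/ position.
    """
    n = corner_vowels.shape[0]
    X = np.hstack([corner_vowels, np.ones((n, 1))])  # append bias column
    W, *_ = np.linalg.lstsq(X, ae_positions, rcond=None)
    return W  # shape (10, 3)

def predict_ae_target(W, learner_corners):
    """Predict a learner-specific /ae/ target from the learner's own
    /iy/, /aa/, and /uw/ positions (a (3, 3) array)."""
    x = np.append(learner_corners.ravel(), 1.0)  # flatten + bias
    return x @ W  # predicted (x, y, z) /ae/ target position
```

In use, the predicted target would then be rendered in the real-time display alongside the learner's head-movement-corrected sensor positions, as described above.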