Vowel articulation training aid for the deaf

The signal processing strategy for a computer-based vowel articulation training aid for the deaf is described. The processing is based on a nonlinear/linear artificial neural network transformation of 16-channel filter-bank data to a two-dimensional space that approximates an F1/F2 space. Speaker-independent vowel training displays have been developed, with vowel identity cued by spatial location and color. Testing with both normally hearing and hearing-impaired listeners indicates that the display is very easy to interpret and that the relationship between the pattern and the spoken vowel is consistent. The continuous relationship between phonetic perception and display patterns provides feedback for fine-tuning of vocal tract settings.
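
To make the described mapping concrete, the sketch below shows one plausible form of the nonlinear/linear transformation: a 16-channel filter-bank frame passed through a nonlinear (sigmoid) hidden layer followed by a linear output layer, yielding a 2-D point approximating an (F1, F2) position. Only the 16 inputs and 2 outputs come from the abstract; the hidden-layer width, the sigmoid nonlinearity, and the random weights are assumptions standing in for parameters that would be learned from multi-speaker vowel data.

```python
import numpy as np

N_CHANNELS = 16   # filter-bank channels (from the abstract)
N_HIDDEN = 8      # assumed hidden-layer width (not specified in the abstract)
N_OUT = 2         # display coordinates, approximating F1/F2

# Placeholder weights; in the actual aid these would be trained.
rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.1, size=(N_HIDDEN, N_CHANNELS))  # nonlinear stage
b1 = np.zeros(N_HIDDEN)
W2 = rng.normal(scale=0.1, size=(N_OUT, N_HIDDEN))       # linear stage
b2 = np.zeros(N_OUT)

def vowel_display_point(frame: np.ndarray) -> np.ndarray:
    """Map one 16-channel filter-bank frame to a 2-D display point."""
    hidden = 1.0 / (1.0 + np.exp(-(W1 @ frame + b1)))  # sigmoid nonlinearity
    return W2 @ hidden + b2                            # linear projection

# Example: one frame of filter-bank values mapped to a display point.
frame = rng.uniform(size=N_CHANNELS)
x, y = vowel_display_point(frame)
print(f"display point: ({x:.3f}, {y:.3f})")
```

In a training display of the kind described, each incoming frame would be plotted at its mapped coordinates, so that continuous changes in vocal tract settings move the point smoothly through the vowel space.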