Vowel articulation training aid for the deaf
The signal processing strategy for a computer-based vowel articulation training aid for the deaf is described. The processing is based on a nonlinear/linear artificial neural network transformation of 16-channel filter bank data to a two-dimensional space that approximates an F1/F2 space. Speaker-independent vowel training displays have been developed, with vowel identity cued by spatial location and color. Testing with both normal-hearing and hearing-impaired listeners indicates that the display is very easy to interpret and that the relationship between the pattern and the spoken vowel is consistent. The continuous relationship between phonetic perception and display patterns provides feedback for fine tuning of vocal tract settings.
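To make the described pipeline concrete, the following is a minimal sketch of how a 16-channel filter-bank frame could be projected onto a two-dimensional display point by a small nonlinear/linear network. The layer sizes, the tanh nonlinearity, the random placeholder weights, and the function name `frame_to_display` are illustrative assumptions, not the trained network or values reported in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

N_CHANNELS = 16   # filter-bank channels, as stated in the abstract
N_HIDDEN = 8      # hidden units (assumed)
N_OUT = 2         # two display coordinates, roughly F1/F2

# Placeholder weights; in the actual aid these would be learned from speech data.
W1 = rng.normal(scale=0.3, size=(N_HIDDEN, N_CHANNELS))
b1 = np.zeros(N_HIDDEN)
W2 = rng.normal(scale=0.3, size=(N_OUT, N_HIDDEN))
b2 = np.zeros(N_OUT)

def frame_to_display(frame: np.ndarray) -> np.ndarray:
    """Map one 16-channel filter-bank frame (e.g. band energies)
    to a 2-D point for the vowel training display."""
    h = np.tanh(W1 @ frame + b1)   # nonlinear hidden layer
    return W2 @ h + b2             # linear output: (x, y) screen position

# Example: one synthetic frame of band energies.
frame = rng.uniform(0.0, 1.0, size=N_CHANNELS)
x, y = frame_to_display(frame)
print(f"display point: ({x:.2f}, {y:.2f})")
```

In the training aid, successive frames of a spoken vowel would trace a trajectory in this plane, with target regions for each vowel cued by location and color; the sketch only shows the per-frame mapping.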