HMM-neural network monophone models for computer-based articulation training for the hearing impaired
[1] Stephen A. Zahorian, et al. Vowel classification for computer-based visual feedback for speech training for the hearing impaired, 2002, INTERSPEECH.
[2] Stephen A. Zahorian, et al. Spectral-shape features versus formants as acoustic correlates for vowels, 1993, The Journal of the Acoustical Society of America.
[3] Stephen A. Zahorian, et al. A partitioned neural network approach for vowel classification using smoothed time/frequency features, 1999, IEEE Transactions on Speech and Audio Processing.
[4] Stephen A. Zahorian, et al. Yet Another Algorithm for Pitch Tracking, 2002, IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP).
[5] Nikos Fakotakis, et al. Fast endpoint detection algorithm for isolated word recognition in office environment, 1991, IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP).
[6] Stephen A. Zahorian, et al. Personal computer software vowel training aid for the hearing impaired, 1998, IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP).