Investigations on Speaking Mode Discrepancies in EMG-Based Speech Recognition
[1] Herbert Gish, et al. Understanding and improving speech recognition performance through the use of diagnostic tools. 1995 International Conference on Acoustics, Speech, and Signal Processing, 1995.
[2] Tanja Schultz, et al. Modeling coarticulation in EMG-based continuous speech recognition. Speech Communication, 2010.
[3] Tanja Schultz, et al. Impact of different speaking modes on EMG-based speech recognition. INTERSPEECH, 2009.
[4] Tanja Schultz, et al. Impact of lack of acoustic feedback in EMG-based silent speech recognition. INTERSPEECH, 2010.
[5] Tanja Schultz, et al. A Spectral Mapping Method for EMG-based Recognition of Silent Speech. B-Interface, 2010.
[6] Florian Metze, et al. Analysis of gender normalization using MLP and VTLN features. INTERSPEECH, 2010.
[7] Michael Finke, et al. Wide context acoustic modeling in read vs. spontaneous speech. 1997 IEEE International Conference on Acoustics, Speech, and Signal Processing, 1997.
[8] Michael Schünke, et al. Kopf und Neuroanatomie [Head and Neuroanatomy], 2006.
[9] Florian Metze, et al. A flexible stream architecture for ASR using articulatory features. INTERSPEECH, 2002.
[10] Tanja Schultz, et al. Towards continuous speech recognition using surface electromyography. INTERSPEECH, 2006.
[11] L. Maier-Hein, et al. Session independent non-audible speech recognition using surface electromyography. IEEE Workshop on Automatic Speech Recognition and Understanding, 2005.
[12] J. M. Gilbert, et al. Silent speech interfaces. Speech Communication, 2010.