Comparison of DCT and autoencoder-based features for DNN-HMM multimodal silent speech recognition
Licheng Liu | Yan Ji | Hongcui Wang | Bruce Denby