Visual information and redundancy conveyed by internal articulator dynamics in synthetic audiovisual speech
[1] Thierry Dutoit, et al. The MBROLA project: towards a set of high quality speech synthesizers free of use for non commercial purposes, 1996, Proceedings of the Fourth International Conference on Spoken Language Processing (ICSLP '96).
[2] Sascha Fagel, et al. An articulation model for audiovisual speech synthesis - Determination, adjustment, evaluation, 2004, Speech Communication.
[3] Bernd J. Kröger. Ein phonetisches Modell der Sprachproduktion [A phonetic model of speech production], 1998.
[4] Anders Löfqvist, et al. Speech as Audible Gestures, 1990.
[5] W. H. Sumby, et al. Visual contribution to speech intelligibility in noise, 1954.
[6] Q. Summerfield, et al. Use of Visual Information for Phonetic Perception, 1979, Phonetica.
[7] Sascha Fagel. Audiovisuelle Sprachsynthese: Systementwicklung und -bewertung [Audiovisual speech synthesis: system development and evaluation], 2004.
[8] Sascha Fagel, et al. Crossmodal Integration and McGurk-Effect in Synthetic Audiovisual Speech, 2006.
[9] Joanna Light, et al. Using visible speech to train perception and production of speech for individuals with hearing loss, 2004, Journal of Speech, Language, and Hearing Research (JSLHR).
[10] Jonas Beskow, et al. Recent Developments in Facial Animation: An Inside View, 1998, AVSP.