Determining optimal signal features and parameters for HMM-based emotion classification
[1] Wolfgang Wahlster, et al. SmartKom: Foundations of Multimodal Dialogue Systems, 2006, SmartKom.
[2] Elmar Nöth, et al. We are not amused - but how do you know? User states in a multi-modal dialogue system, 2003, INTERSPEECH.
[3] Ioannis Pitas, et al. The eNTERFACE05 Audio-Visual Emotion Database, 2006, 22nd International Conference on Data Engineering Workshops (ICDEW'06).
[4] Anton Batliner. Whence and Whither: The Automatic Recognition of Emotions in Speech (Invited Keynote), 2008, PIT.
[5] Astrid Paeschke, et al. A database of German emotional speech, 2005, INTERSPEECH.
[6] Andreas Wendemuth, et al. Processing affected speech within human machine interaction, 2009, INTERSPEECH.
[7] Elisabeth André, et al. Perception in Multimodal Dialogue Systems, 4th IEEE Tutorial and Research Workshop on Perception and Interactive Technologies for Speech-Based Systems, PIT 2008, Kloster Irsee, Germany, June 16-18, 2008, Proceedings, 2008, PIT.
[8] Björn W. Schuller, et al. On the Influence of Phonetic Content Variation for Acoustic Emotion Recognition, 2008, PIT.
[9] Steve Young, et al. The HTK Book, 1995.