A hierarchical framework for modeling multimodality and emotional evolution in affective dialogs
[1] Björn W. Schuller, et al. Context-sensitive multimodal emotion recognition from speech and facial expression using bidirectional LSTM modeling, 2010, INTERSPEECH.
[2] Qi Tian, et al. Feature selection using principal feature analysis, 2007, ACM Multimedia.
[3] Carlos Busso, et al. IEMOCAP: interactive emotional dyadic motion capture database, 2008, Lang. Resour. Evaluation.
[4] Prashant Lahane, et al. Emotion Recognition from Facial Expressions using Multilevel HMM, 2014.
[5] Carlos Busso, et al. Modeling mutual influence of interlocutor emotion states in dyadic spoken interactions, 2009, INTERSPEECH.
[6] Nicu Sebe, et al. Multimodal Emotion Recognition, 2005.
[7] Carlos Busso, et al. Visual emotion recognition using compact facial representations and viseme information, 2010, 2010 IEEE International Conference on Acoustics, Speech and Signal Processing.
[8] A. Rogier. [Communication without words], 1971, Tijdschrift voor ziekenverpleging.
[9] Loïc Kessous, et al. Multimodal emotion recognition from expressive faces, body gestures and speech, 2007, AIAI.
[10] Cristina Conati, et al. Probabilistic assessment of user's emotions in educational games, 2002, Appl. Artif. Intell..
[11] Athanasios Katsamanis, et al. Estimation of ordinal approach-avoidance labels in dyadic interactions: Ordinal logistic regression approach, 2011, 2011 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).