A New Interface for Affective State Estimation and Annotation from Speech