EmoTour: Multimodal Emotion Recognition using Physiological and Audio-Visual Features
Yutaka Arakawa | Keiichi Yasumoto | Wolfgang Minker | Yuta Takahashi | Yuki Matsuda | Dmitrii Fedotov
[1] Carlos Busso, et al. The USC CreativeIT database of multimodal dyadic interactions: from speech and full body motion capture to continuous emotional annotations, 2015, Language Resources and Evaluation.
[2] Andreas Bulling, et al. Pupil: an open source platform for pervasive eye tracking and mobile gaze-based interaction, 2014, UbiComp Adjunct.
[3] J. Russell. A circumplex model of affect, 1980, Journal of Personality and Social Psychology.
[4] Steffen Leonhardt, et al. Automatic Step Detection in the Accelerometer Signal, 2007, BSN.
[5] Dong Yu, et al. Speech emotion recognition using deep neural network and extreme learning machine, 2014, INTERSPEECH.
[6] Björn Schuller, et al. openSMILE: the Munich versatile and fast open-source audio feature extractor, 2010, ACM Multimedia.
[7] Maria Konstantinova, et al. RAMAS: Russian Multimodal Corpus of Dyadic Interaction for studying emotion recognition, 2018.
[8] P. Ekman, et al. Facial action coding system, 2019.
[9] George Trigeorgis, et al. End-to-End Multimodal Emotion Recognition Using Deep Neural Networks, 2017, IEEE Journal of Selected Topics in Signal Processing.
[10] Anurag Mittal, et al. Bi-modal First Impressions Recognition Using Temporally Ordered Deep Audio and Stochastic Visual Features, 2016, ECCV Workshops.
[11] Eman M. G. Younis, et al. Towards unravelling the relationship between on-body, environmental and emotion data using sensor information fusion approach, 2018, Information Fusion.
[12] Yuanxi Li, et al. Beyond Big Data of Human Behaviors: Modeling Human Behaviors and Deep Emotions, 2018, IEEE Conference on Multimedia Information Processing and Retrieval (MIPR).
[13] Haizhou Li, et al. Mobile acoustic Emotion Recognition, 2016, IEEE Region 10 Conference (TENCON).
[14] Yutaka Arakawa, et al. SenStick: Comprehensive Sensing Platform with an Ultra Tiny All-In-One Sensor Board for IoT Research, 2017, Journal of Sensors.
[15] M. Bradley, et al. The pupil as a measure of emotional arousal and autonomic activation, 2008, Psychophysiology.
[16] Fabien Ringeval, et al. Introducing the RECOLA multimodal corpus of remote collaborative and affective interactions, 2013, 10th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG).
[17] P. Ekman, et al. What the face reveals: basic and applied studies of spontaneous expression using the facial action coding system (FACS), 2005.
[18] Thierry Pun, et al. Multimodal Emotion Recognition in Response to Videos, 2012, IEEE Transactions on Affective Computing.
[19] Maja Pantic, et al. Early-access article, 2022, IEEE Transactions on Affective Computing.
[20] Peter Robinson, et al. OpenFace: An open source facial behavior analysis toolkit, 2016, IEEE Winter Conference on Applications of Computer Vision (WACV).
[21] Tingshao Zhu, et al. Emotion recognition based on customized smart bracelet with built-in accelerometer, 2016, PeerJ.
[22] Albert Ali Salah, et al. Robust Acoustic Emotion Recognition Based on Cascaded Normalization and Extreme Learning Machines, 2016, ISNN.
[23] Bao-Liang Lu, et al. Multimodal emotion recognition using EEG and eye tracking data, 2014, 36th Annual International Conference of the IEEE Engineering in Medicine and Biology Society.
[24] Andrzej Majkowski, et al. Emotion recognition using facial expressions, 2017, ICCS.
[25] Fabien Ringeval, et al. The INTERSPEECH 2014 computational paralinguistics challenge: cognitive & physical load, 2014, INTERSPEECH.