EmoTour: Estimating Emotion and Satisfaction of Users Based on Behavioral Cues and Audiovisual Data
Yutaka Arakawa | Keiichi Yasumoto | Wolfgang Minker | Yuta Takahashi | Yuki Matsuda | Dmitrii Fedotov
[1] Peter Zeile, et al. Urban Emotions - Geo-Semantic Emotion Extraction from Technical Sensors, Human Sensors and Crowdsourced Data, 2014, LBS.
[2] D. Isaacowitz, et al. Cultural differences in gaze and emotion recognition: Americans contrast more than Chinese, 2013, Emotion.
[3] Haizhou Li, et al. Mobile Acoustic Emotion Recognition, 2016, 2016 IEEE Region 10 Conference (TENCON).
[4] Joaquín Alegre, et al. Tourist satisfaction and dissatisfaction, 2010.
[5] Yutaka Arakawa, et al. SenStick: Comprehensive Sensing Platform with an Ultra Tiny All-In-One Sensor Board for IoT Research, 2017, J. Sensors.
[6] Jesse Hoey, et al. From individual to group-level emotion recognition: EmotiW 5.0, 2017, ICMI.
[7] Keiichi Yasumoto, et al. SakuraSensor: quasi-realtime cherry-lined roads detection through participatory video sensing by cars, 2015, UbiComp.
[8] Yevgeni Koucheryavy, et al. IoT Use Cases in Healthcare and Tourism, 2015, 2015 IEEE 17th Conference on Business Informatics.
[9] Peter Robinson, et al. OpenFace: An open source facial behavior analysis toolkit, 2016, 2016 IEEE Winter Conference on Applications of Computer Vision (WACV).
[10] Fabien Ringeval, et al. Introducing the RECOLA multimodal corpus of remote collaborative and affective interactions, 2013, 2013 10th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG).
[11] Shivani Nagalkar, et al. Emotion recognition using facial expressions, 2019.
[12] Bao-Liang Lu, et al. Multimodal emotion recognition using EEG and eye tracking data, 2014, 2014 36th Annual International Conference of the IEEE Engineering in Medicine and Biology Society.
[13] Björn Schuller, et al. openSMILE: the Munich versatile and fast open-source audio feature extractor, 2010, ACM Multimedia.
[14] P. Ekman, et al. What the face reveals: basic and applied studies of spontaneous expression using the Facial Action Coding System (FACS), 2005.
[15] Stefan Ultes, et al. Adaptive dialogue management in the KRISTINA project for multicultural health care applications, 2015.
[16] Carlos Busso, et al. The USC CreativeIT database of multimodal dyadic interactions: from speech and full body motion capture to continuous emotional annotations, 2015, Language Resources and Evaluation.
[17] Mohammad Soleymani, et al. Analysis of EEG Signals and Facial Expressions for Continuous Emotion Detection, 2016, IEEE Transactions on Affective Computing.
[18] J. Russell. A circumplex model of affect, 1980.
[19] Steffen Leonhardt, et al. Automatic Step Detection in the Accelerometer Signal, 2007, BSN.
[20] Yutaka Arakawa, et al. EmoTour: Multimodal Emotion Recognition using Physiological and Audio-Visual Features, 2018, UbiComp/ISWC Adjunct.
[21] Jin Nakazawa, et al. MOLMOD: Analysis of Feelings based on Vital Information for Mood Acquisition, 2009.
[22] George Trigeorgis, et al. End-to-End Multimodal Emotion Recognition Using Deep Neural Networks, 2017, IEEE Journal of Selected Topics in Signal Processing.
[23] Anurag Mittal, et al. Bi-modal First Impressions Recognition Using Temporally Ordered Deep Audio and Stochastic Visual Features, 2016, ECCV Workshops.
[24] Dong Yu, et al. Speech emotion recognition using deep neural network and extreme learning machine, 2014, INTERSPEECH.
[25] Eman M. G. Younis, et al. Towards unravelling the relationship between on-body, environmental and emotion data using sensor information fusion approach, 2018, Inf. Fusion.
[26] Andreas Bulling, et al. Pupil: an open source platform for pervasive eye tracking and mobile gaze-based interaction, 2014, UbiComp Adjunct.
[27] Ching-Fu Chen, et al. Experience quality, perceived value, satisfaction and behavioral intentions for heritage tourists, 2010.
[28] S. Kanoh, et al. Development of an eyewear to measure eye and body movements, 2015, 2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC).
[29] Yutaka Arakawa, et al. Towards Estimating Emotions and Satisfaction Level of Tourist Based on Eye Gaze and Head Movement, 2018, 2018 IEEE International Conference on Smart Computing (SMARTCOMP).
[30] Leontios J. Hadjileontiadis, et al. Emotion Recognition from Brain Signals Using Hybrid Adaptive Filtering and Higher Order Crossings Analysis, 2010, IEEE Transactions on Affective Computing.
[31] Thierry Pun, et al. Multimodal Emotion Recognition in Response to Videos, 2012, IEEE Transactions on Affective Computing.
[32] Maja Pantic, et al. Early-access article (title unavailable), 2022, IEEE Transactions on Affective Computing.
[33] Ping Hu, et al. Learning supervised scoring ensemble for emotion recognition in the wild, 2017, ICMI.
[34] Yuan-Pin Lin, et al. EEG-based emotion recognition in music listening: A comparison of schemes for multiclass support vector machine, 2009, 2009 IEEE International Conference on Acoustics, Speech and Signal Processing.
[35] Juliana Miehle, et al. What Causes the Differences in Communication Styles? A Multicultural Study on Directness and Elaborateness, 2018, LREC.
[36] Fabien Ringeval, et al. The INTERSPEECH 2014 computational paralinguistics challenge: cognitive & physical load, 2014, INTERSPEECH.
[37] Wolfgang Minker, et al. Contextual Dependencies in Time-Continuous Multidimensional Affect Recognition, 2018, LREC.
[38] Mohammad Mahdi Ghassemi, et al. Predicting Latent Narrative Mood Using Audio and Physiologic Data, 2017, AAAI.
[39] Chung-Hsien Wu, et al. Survey on audiovisual emotion recognition: databases, features, and data fusion strategies, 2014, APSIPA Transactions on Signal and Information Processing.
[40] Wolfgang Minker, et al. Emotion Recognition in Real-world Conditions with Acoustic and Visual Features, 2014, ICMI.
[41] Mike Thelwall, et al. Seeing Stars of Valence and Arousal in Blog Posts, 2013, IEEE Transactions on Affective Computing.
[42] Albert Ali Salah, et al. Robust Acoustic Emotion Recognition Based on Cascaded Normalization and Extreme Learning Machines, 2016, ISNN.
[43] Jean-Philippe Thiran, et al. Prediction of asynchronous dimensional emotion ratings from audiovisual and physiological data, 2015, Pattern Recognit. Lett.
[44] Olga Perepelkina, et al. RAMAS: Russian Multimodal Corpus of Dyadic Interaction for Affective Computing, 2018, SPECOM.
[45] Imran A. Zualkernan, et al. Emotion recognition using mobile phones, 2016, 2016 IEEE 18th International Conference on e-Health Networking, Applications and Services (Healthcom).