Analysis of EEG Signals and Facial Expressions for Continuous Emotion Detection
Mohammad Soleymani | Sadjad Asghari-Esfeden | Yun Fu | Maja Pantic
[1] Maja Pantic,et al. Implicit human-centered tagging [Social Sciences] , 2009, IEEE Signal Process. Mag..
[2] Mohammad Soleymani,et al. Continuous emotion detection using EEG signals and facial expressions , 2014, 2014 IEEE International Conference on Multimedia and Expo (ICME).
[3] Mohammad Soleymani,et al. A Multimodal Database for Affect Recognition and Implicit Tagging , 2012, IEEE Transactions on Affective Computing.
[4] Nicu Sebe,et al. Exploiting facial expressions for affective video summarisation , 2009, CIVR '09.
[5] E. Schellenberg,et al. Misery loves company: mood-congruent emotional responding to music. , 2011, Emotion.
[6] Subramanian Ramanathan,et al. User-centric Affective Video Tagging from MEG and Peripheral Physiological Responses , 2013, 2013 Humaine Association Conference on Affective Computing and Intelligent Interaction.
[7] Mohammad Soleymani,et al. Queries and tags in affect-based multimedia retrieval , 2009, 2009 IEEE International Conference on Multimedia and Expo.
[8] Roddy Cowie,et al. FEELTRACE: an instrument for recording perceived emotion in real time , 2000 .
[9] Björn W. Schuller,et al. Categorical and dimensional affect analysis in continuous input: Current trends and future directions , 2013, Image Vis. Comput..
[10] Björn W. Schuller,et al. On-line continuous-time music mood regression with deep recurrent neural networks , 2014, 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).
[11] Mohammad Soleymani,et al. Corpus Development for Affective Video Indexing , 2012, IEEE Transactions on Multimedia.
[12] Carlos Busso,et al. Analysis and Compensation of the Reaction Lag of Evaluators in Continuous Emotional Annotations , 2013, 2013 Humaine Association Conference on Affective Computing and Intelligent Interaction.
[13] M. Bradley,et al. Measuring emotion: the Self-Assessment Manikin and the Semantic Differential. , 1994, Journal of behavior therapy and experimental psychiatry.
[14] Fernando De la Torre,et al. Supervised Descent Method and Its Applications to Face Alignment , 2013, 2013 IEEE Conference on Computer Vision and Pattern Recognition.
[15] Nicu Sebe,et al. Looking at the viewer: analysing facial activity to detect personal highlights of multimedia contents , 2010, Multimedia Tools and Applications.
[16] R. Davidson. Affective neuroscience and psychophysiology: toward a synthesis. , 2003, Psychophysiology.
[17] Julien Fleureau,et al. Affective Benchmarking of Movies Based on the Physiological Responses of a Real Audience , 2013, 2013 Humaine Association Conference on Affective Computing and Intelligent Interaction.
[18] Ioannis Patras,et al. Fusion of facial expressions and EEG for implicit affective tagging , 2013, Image Vis. Comput..
[19] Yale Song,et al. Learning a sparse codebook of facial and body microexpressions for emotion recognition , 2013, ICMI '13.
[20] Mohammad Soleymani,et al. Affective Characterization of Movie Scenes Based on Content Analysis and Physiological Changes , 2009, Int. J. Semantic Comput..
[21] J. Russell. Culture and the categorization of emotions. , 1991, Psychological bulletin.
[22] Mohammad Soleymani,et al. Highlight Detection in Movie Scenes Through Inter-users, Physiological Linkage , 2013, Social Media Retrieval.
[23] Peter Robinson,et al. Dimensional affect recognition using Continuous Conditional Random Fields , 2013, 2013 10th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG).
[24] Fernando Silveira,et al. Predicting audience responses to movie content from electro-dermal activity signals , 2013, UbiComp.
[25] Hatice Gunes,et al. Continuous Prediction of Spontaneous Affect from Multiple Cues and Modalities in Valence-Arousal Space , 2011, IEEE Transactions on Affective Computing.
[26] J. Russell,et al. Evidence for a three-factor theory of emotions , 1977 .
[27] E. N. Sokolov,et al. Habituation of phasic and tonic components of the orienting reflex. , 1993, International journal of psychophysiology : official journal of the International Organization of Psychophysiology.
[28] Thierry Pun,et al. DEAP: A Database for Emotion Analysis Using Physiological Signals , 2012, IEEE Transactions on Affective Computing.
[29] Thierry Pun,et al. Multimodal Emotion Recognition in Response to Videos , 2012, IEEE Transactions on Affective Computing.
[30] Maja Pantic,et al. IEEE Transactions on Affective Computing.
[31] Mohammad Soleymani,et al. Continuous emotion detection in response to music videos , 2011, Face and Gesture 2011.
[32] Björn W. Schuller,et al. AVEC 2012: the continuous audio/visual emotion challenge , 2012, ICMI '12.
[33] J. Wolpaw,et al. EMG contamination of EEG: spectral and topographical characteristics , 2003, Clinical Neurophysiology.
[34] Athanasia Zlatintsi,et al. A supervised approach to movie emotion tracking , 2011, 2011 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).
[35] Hatice Gunes,et al. Output-associative RVM regression for dimensional and continuous emotion prediction , 2011, Face and Gesture 2011.
[36] Daniel Jonathan McDuff,et al. Crowdsourcing affective responses for predicting media effectiveness , 2014 .
[37] Daniel McDuff,et al. Crowdsourcing Facial Responses to Online Videos , 2012, IEEE Transactions on Affective Computing.
[38] K. Scherer. What are emotions? And how can they be measured? , 2005 .
[39] A. Schaefer,et al. Assessing the effectiveness of a large database of emotion-eliciting films: A new tool for emotion researchers , 2010, Cognition & Emotion.
[40] Björn W. Schuller,et al. Abandoning emotion classes - towards continuous emotion recognition with modelling of long-range dependencies , 2008, INTERSPEECH.
[41] Mohamed Chetouani,et al. Robust continuous prediction of human emotions using multiscale dynamic cues , 2012, ICMI '12.
[42] Romit Roy Choudhury,et al. Your reactions suggest you liked the movie: automatic content rating via reaction sensing , 2013, UbiComp.
[43] C. Granger. Investigating causal relations by econometric models and cross-spectral methods , 1969 .
[44] Chih-Jen Lin,et al. LIBLINEAR: A Library for Large Linear Classification , 2008, J. Mach. Learn. Res..
[45] Jürgen Schmidhuber,et al. Long Short-Term Memory , 1997, Neural Computation.
[46] Daniel McDuff,et al. Predicting online media effectiveness based on smile responses gathered over the Internet , 2013, 2013 10th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG).