Recognizing induced emotions of movie audiences: Are induced and perceived emotions the same?
Thierry Pun | Johanna D. Moore | Guillaume Chanel | Michal Muszynski | Catherine Lai | Theodoros Kostoulas | Leimin Tian | Patrizia Lombardo