Predicting Latent Narrative Mood Using Audio and Physiologic Data
[1] Johannes Wagner, et al. From Physiological Signals to Emotions: Implementing and Comparing Selected Methods for Feature Extraction and Classification, 2005, 2005 IEEE International Conference on Multimedia and Expo.
[2] Shashidhar G. Koolagudi, et al. Emotion recognition from speech: a review, 2012, International Journal of Speech Technology.
[3] Eric R. Ziegel, et al. The Elements of Statistical Learning, 2003, Technometrics.
[4] Dong Yu, et al. Speech emotion recognition using deep neural network and extreme learning machine, 2014, INTERSPEECH.
[5] Björn W. Schuller, et al. Context-sensitive multimodal emotion recognition from speech and facial expression using bidirectional LSTM modeling, 2010, INTERSPEECH.
[6] Jessica J. Cameron, et al. Don't You Know How Much I Need You?, 2010.
[7] R. Plutchik. Human emotions have deep evolutionary roots, a fact that may explain their complexity and provide tools for clinical practice, 2016.
[8] A. Schuler, et al. Social challenges and supports from the perspective of individuals with Asperger syndrome and other autism spectrum disabilities, 2008, Autism: The International Journal of Research and Practice.
[9] Björn Schuller, et al. openSMILE: the Munich versatile and fast open-source audio feature extractor, 2010, ACM Multimedia.
[10] Sidney K. D'Mello, et al. Consistent but modest: a meta-analysis on unimodal and multimodal affect detection accuracies from 30 studies, 2012, ICMI '12.
[11] Björn W. Schuller, et al. Deep neural networks for acoustic emotion recognition: Raising the benchmarks, 2011, 2011 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).
[12] Björn W. Schuller, et al. Timing levels in segment-based speech emotion recognition, 2006, INTERSPEECH.
[13] Jason Williams, et al. Emotion Recognition Using Bio-sensors: First Steps towards an Automatic System, 2004, ADS.
[14] Kristen A. Lindquist, et al. Language as context for the perception of emotion, 2007, Trends in Cognitive Sciences.
[15] B. Mesquita, et al. Context in Emotion Perception, 2011.
[16] Andrea Esuli, et al. SENTIWORDNET: A Publicly Available Lexical Resource for Opinion Mining, 2006, LREC.
[17] Rafael A. Calvo, et al. Affect Detection: An Interdisciplinary Review of Models, Methods, and Their Applications, 2010, IEEE Transactions on Affective Computing.
[18] V. Aubergé, et al. Multimodal Indices to Japanese and French Prosodically Expressed Social Affects, 2009, Language and Speech.
[19] Friedhelm Schwenker, et al. Multimodal Emotion Classification in Naturalistic User Behavior, 2011, HCI.
[20] Roddy Cowie, et al. Emotional speech: Towards a new generation of databases, 2003, Speech Communication.
[21] Fakhri Karray, et al. Survey on speech emotion recognition: Features, classification schemes, and databases, 2011, Pattern Recognition.
[22] Yanning Zhang, et al. Hybrid Deep Neural Network--Hidden Markov Model (DNN-HMM) Based Speech Emotion Recognition, 2013, 2013 Humaine Association Conference on Affective Computing and Intelligent Interaction.
[23] Björn W. Schuller, et al. LSTM-Modeling of continuous emotions in an audiovisual affect recognition framework, 2013, Image and Vision Computing.
[24] Björn W. Schuller, et al. The INTERSPEECH 2009 emotion challenge, 2009, INTERSPEECH.
[25] Christos D. Katsis, et al. Toward Emotion Recognition in Car-Racing Drivers: A Biosignal Processing Approach, 2008, IEEE Transactions on Systems, Man, and Cybernetics - Part A: Systems and Humans.
[26] Zhihong Zeng, et al. A Survey of Affect Recognition Methods: Audio, Visual, and Spontaneous Expressions, 2009, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[27] Razvan Pascanu, et al. Combining modality specific deep neural networks for emotion recognition in video, 2013, ICMI '13.