Speech in Affective Computing
Shrikanth S. Narayanan | Carlos Busso | Angeliki Metallinou | Chi-Chun Lee | Sungbok Lee | Jangwon Kim
[1] Donna Erickson, et al. Some articulatory measurements of real sadness, 2004, INTERSPEECH.
[2] M. H. Cohen, et al. Electromagnetic midsagittal articulometer systems for transducing speech articulatory movements, 1992, The Journal of the Acoustical Society of America.
[3] Carlos Busso, et al. Analysis of Emotionally Salient Aspects of Fundamental Frequency for Emotion Detection, 2009, IEEE Transactions on Audio, Speech, and Language Processing.
[4] Panayiotis G. Georgiou, et al. Behavioral Signal Processing: Deriving Human Behavioral Informatics From Speech and Language, 2013, Proceedings of the IEEE.
[5] Angelika Königseder, et al. Walter de Gruyter, 2016.
[6] Carlos Busso, et al. Interpreting ambiguous emotional expressions, 2009, 3rd International Conference on Affective Computing and Intelligent Interaction and Workshops (ACII).
[7] Michael Kipp, et al. ANVIL - a generic annotation tool for multimodal dialogue, 2001, INTERSPEECH.
[8] Björn W. Schuller, et al. Abandoning emotion classes - towards continuous emotion recognition with modelling of long-range dependencies, 2008, INTERSPEECH.
[9] Björn Schuller, et al. openSMILE: the Munich versatile and fast open-source audio feature extractor, 2010, ACM Multimedia.
[10] Roddy Cowie, et al. FEELTRACE: an instrument for recording perceived emotion in real time, 2000.
[11] Carlos Busso, et al. IEMOCAP: interactive emotional dyadic motion capture database, 2008, Language Resources and Evaluation.
[12] Shrikanth Narayanan, et al. An approach to real-time magnetic resonance imaging for speech production, 2003, The Journal of the Acoustical Society of America.
[13] Paul Boersma, et al. Praat, a system for doing phonetics by computer, 2002.
[14] Shrikanth S. Narayanan, et al. A study of interplay between articulatory movement and prosodic characteristics in emotional speech production, 2010, INTERSPEECH.
[15] K. Scherer, et al. Vocal expression of affect, 2005.
[16] Shrikanth S. Narayanan, et al. Toward detecting emotions in spoken dialogs, 2005, IEEE Transactions on Speech and Audio Processing.
[17] Björn W. Schuller, et al. Context-Sensitive Learning for Enhanced Audiovisual Emotion Classification, 2012, IEEE Transactions on Affective Computing.
[18] Carlos Busso, et al. A personalized emotion recognition system using an unsupervised feature adaptation scheme, 2012, IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).
[19] Klaus R. Scherer, et al. Vocal communication of emotion: A review of research paradigms, 2003, Speech Communication.
[20] Björn W. Schuller, et al. Paralinguistics in speech and language - State-of-the-art and the challenge, 2013, Computer Speech and Language.
[21] Carlos Busso, et al. Exploring Cross-Modality Affective Reactions for Audiovisual Emotion Recognition, 2013, IEEE Transactions on Affective Computing.
[22] Shrikanth S. Narayanan, et al. An articulatory study of fricative consonants using magnetic resonance imaging, 1995.
[23] Mark A. Hall, et al. Correlation-based Feature Selection for Machine Learning, 2003.
[24] Zhihong Zeng, et al. A Survey of Affect Recognition Methods: Audio, Visual, and Spontaneous Expressions, 2009, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[25] Carlos Busso, et al. Iterative feature normalization for emotional speech detection, 2011, IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).
[26] Zhigang Deng, et al. An acoustic study of emotions expressed in speech, 2004, INTERSPEECH.
[27] Ralph Arnote, et al. Hong Kong (China), 1996, OECD/G20 Base Erosion and Profit Shifting Project.
[28] Peter Wittenburg, et al. ELAN: a Professional Framework for Multimodality Research, 2006, LREC.
[29] Athanasios Katsamanis, et al. Toward automating a human behavioral coding system for married couples' interactions using speech acoustic features, 2013, Speech Communication.
[30] D. Watson, et al. Development and validation of brief measures of positive and negative affect: the PANAS scales, 1988, Journal of Personality and Social Psychology.
[31] John H. L. Hansen, et al. Discrete-Time Processing of Speech Signals, 1993.
[32] Ailbhe Ní Chasaide, et al. The role of voice quality in communicating emotion, mood and attitude, 2003, Speech Communication.
[33] Anne-Maria Laukkanen, et al. Electroglottogram Analysis of Emotionally Styled Phonation, 2009, COST 2102 School.
[34] Björn W. Schuller, et al. LSTM-Modeling of continuous emotions in an audiovisual affect recognition framework, 2013, Image and Vision Computing.
[35] Shrikanth S. Narayanan, et al. An articulatory study of emotional speech production, 2005, INTERSPEECH.
[36] Athanasios Katsamanis, et al. A hierarchical framework for modeling multimodality and emotional evolution in affective dialogs, 2012, IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).
[37] Cairong Zou, et al. Speech emotion recognition using modified quadratic discrimination function, 2008.
[38] Carlos Busso, et al. Modeling mutual influence of interlocutor emotion states in dyadic spoken interactions, 2009, INTERSPEECH.
[39] Shrikanth S. Narayanan, et al. A study of emotional speech articulation using a fast magnetic resonance imaging technique, 2006, INTERSPEECH.
[40] Athanasios Katsamanis, et al. Tracking continuous emotional trends of participants during affective dyadic interactions using body language and speech information, 2013, Image and Vision Computing.
[41] S. Kiritani, et al. Computer controlled radiography for observation of movements of articulatory and other human organs, 1973, Computers in Biology and Medicine.
[42] Shrikanth S. Narayanan, et al. Intoxicated speech detection: A fusion framework with speaker-normalized hierarchical functionals and GMM supervectors, 2014, Computer Speech and Language.
[43] J. Bachorowski. Vocal Expression and Perception of Emotion, 1999.
[44] Shrikanth S. Narayanan, et al. A Robust Unsupervised Arousal Rating Framework using Prosody with Cross-Corpora Evaluation, 2012, INTERSPEECH.
[45] Tsang-Long Pao, et al. A Comparative Study of Different Weighting Schemes on KNN-Based Emotion Recognition in Mandarin Speech, 2007, ICIC.
[46] Björn W. Schuller, et al. Comparing one and two-stage acoustic modeling in the recognition of emotion in speech, 2007, IEEE Workshop on Automatic Speech Recognition & Understanding (ASRU).
[47] Loïc Kessous, et al. The relevance of feature type for the automatic classification of emotional user states: low level descriptors and functionals, 2007, INTERSPEECH.
[48] Fuhui Long, et al. Feature selection based on mutual information criteria of max-dependency, max-relevance, and min-redundancy, 2003, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[49] M. Stone. A guide to analysing tongue motion from ultrasound images, 2005, Clinical Linguistics & Phonetics.
[50] Björn Schuller, et al. Emotion recognition in the noise applying large acoustic feature sets, 2006, Speech Prosody.
[51] Gunnar Fant. Acoustic Theory of Speech Production, 1960.
[52] Björn W. Schuller, et al. Frame vs. Turn-Level: Emotion Recognition from Speech Considering Static and Dynamic Processing, 2007, ACII.
[53] Carlos Busso, et al. Emotion recognition using a hierarchical binary decision tree approach, 2011, Speech Communication.
[54] Ian H. Witten, et al. The WEKA data mining software: an update, 2009, SIGKDD Explorations.
[55] Steve Young, et al. The HTK Book, 1995.
[56] Anil K. Jain, et al. Feature Selection: Evaluation, Application, and Small Sample Performance, 1997, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[57] Chih-Jen Lin, et al. LIBSVM: A library for support vector machines, 2011, ACM Transactions on Intelligent Systems and Technology (TIST).
[58] Georges Quénot, et al. Recognizing emotions for the audio-visual document indexing, 2004, Ninth IEEE International Symposium on Computers and Communications (ISCC).
[59] Shrikanth S. Narayanan, et al. An Exploratory Study of the Relations Between Perceived Emotion Strength and Articulatory Kinematics, 2011, INTERSPEECH.
[60] Shrikanth Narayanan, et al. Morphological variation in the adult hard palate and posterior pharyngeal wall, 2013, Journal of Speech, Language, and Hearing Research.
[61] E. Ambikairajah, et al. Speaker Normalisation for Speech-Based Emotion Detection, 2007, 15th International Conference on Digital Signal Processing.
[62] Ragini Verma, et al. Class-level spectral features for emotion recognition, 2010, Speech Communication.
[63] Björn W. Schuller, et al. Hidden Markov model-based speech emotion recognition, 2003, IEEE International Conference on Multimedia and Expo (ICME).
[64] Chloé Clavel, et al. Fear-type emotion recognition for future audio-based surveillance systems, 2008, Speech Communication.