Judging Emotion from Low-Pass Filtered Naturalistic Emotional Speech
[1] Sarah Jane Delany, et al. A Crowdsourcing Approach to Labelling a Mood Induced Speech Corpus, 2012.
[2] K. Scherer, et al. Minimal cues in the vocal communication of affect: Judging emotions from content-masked speech, 1972, Journal of Psycholinguistic Research.
[3] K. Scherer, et al. Vocal cues to deception: A comparative channel approach, 1985, Journal of Psycholinguistic Research.
[4] Sarah Harkness, et al. Targeted high and low speech frequency bands to right and left ears respectively improve task performance and perceived sociability in dyadic conversations, 2009, Laterality.
[5] L. Nygaard, et al. Communicating emotion: linking affective prosody and word meaning, 2008, Journal of Experimental Psychology: Human Perception and Performance.
[6] B. Moore, et al. Effects of low pass filtering on the intelligibility of speech in noise for people with and without dead regions at high frequencies, 2001, The Journal of the Acoustical Society of America.
[7] Jack J. Jiang, et al. Effects of low-pass filtering on acoustic analysis of voice, 2011, Journal of Voice: Official Journal of the Voice Foundation.
[8] C. Cullen, et al. Generation of High Quality Audio Natural Emotional Speech Corpus using Task Based Mood Induction, 2006.
[9] B. Vaughan, et al. Naturalistic Emotional Speech Corpora with Large Scale Emotional Dimension Ratings, 2011.
[10] Paul Boersma, et al. Praat, a system for doing phonetics by computer, 2002.
[11] George N. Votsis, et al. Emotion recognition in human-computer interaction, 2001, IEEE Signal Processing Magazine.
[12] N. Amir, et al. Effects of Random Splicing on Listeners' Perceptions, 2007.
[13] Gerardo Hermosillo, et al. Learning From Crowds, 2010, Journal of Machine Learning Research.
[14] C. Lorenzi, et al. Effects of lowpass and highpass filtering on the intelligibility of speech based on temporal fine structure or envelope cues, 2010, Hearing Research.
[15] Daniel Hirst, et al. Detecting changes in key and range for the automatic modelling and coding of intonation, 2008, Speech Prosody 2008.
[16] Carlos Busso, et al. IEMOCAP: interactive emotional dyadic motion capture database, 2008, Language Resources and Evaluation.
[17] Tracey M. Derwing, et al. Detection of nonnative speaker status from content-masked speech, 2010, Speech Communication.
[18] Marc Schröder, et al. Dimensional Emotion Representation as a Basis for Speech Synthesis with Non-extreme Emotions, 2004, ADS.
[19] Christian Lorenzi, et al. The ability of listeners to use recovered envelope cues from speech fine structure, 2006, The Journal of the Acoustical Society of America.
[20] Klaus Krippendorff, et al. Computing Krippendorff's Alpha-Reliability, 2011.
[21] Jonghwa Kim, et al. Bimodal Emotion Recognition using Speech and Physiological Changes, 2007.
[22] P. Boersma. Praat: doing phonetics by computer (version 5.1.05), 2009.
[23] Maria Uther, et al. Effects of Filtered Speech on Affect, 2022.
[24] M. Otto, et al. The voice of emotional memory: content-filtered speech in panic disorder, social phobia, and major depressive disorder, 2001, Behaviour Research and Therapy.
[25] B. Moore, et al. Effects of low-pass filtering on the intelligibility of speech in quiet for people with and without dead regions at high frequencies, 2001, The Journal of the Acoustical Society of America.