Rd as a Control Parameter to Explore Affective Correlates of the Tense-Lax Continuum

This study uses the Rd glottal waveshape parameter to simulate the phonatory tense-lax continuum and to explore its affective correlates in terms of activation and valence. Based on a natural utterance that was inverse filtered and source-parameterised, a range of synthesised stimuli varying along the tense-lax continuum was generated using Rd as a control parameter. Two additional stimuli were included: versions of the most lax stimuli with added creak (lax-creaky voice). In a listening test, participants chose an emotion from a set of affective labels and indicated its perceived strength; they also rated the naturalness of the stimulus and their confidence in the judgment. Results showed that stimuli at the tense end of the range were most frequently associated with angry, stimuli at the lax end with sad, and stimuli in the intermediate range with content. The results also indicate, as found in our earlier work, that a given stimulus can be associated with more than one affect. Overall, these results show that Rd can serve as a single control parameter for generating variation along the tense-lax continuum of phonation.
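To make the control-parameter idea concrete, the sketch below shows how a single Rd value can be mapped to the LF-model shape parameters via Fant's published prediction formulas (Ra, Rk, Rg as functions of Rd). This is a minimal illustration only: the function name, the F0 value, and the swept Rd range are assumptions for the example, and the paper's actual stimuli were derived from an inverse-filtered, source-parameterised natural utterance rather than from this bare mapping.

```python
import numpy as np

def rd_to_lf_params(Rd, T0):
    """Map the Rd waveshape parameter to LF-model timing parameters.

    Uses Fant's prediction formulas relating Rd to the normalised
    R-parameters (Ra, Rk, Rg). Illustrative sketch only; it does not
    reproduce the stimulus-generation pipeline of the study.
    """
    # Fant's regression formulas (valid roughly for 0.3 <= Rd <= 2.7)
    Ra = (-1.0 + 4.8 * Rd) / 100.0
    Rk = (22.4 + 11.8 * Rd) / 100.0
    Rg = 0.25 * Rk / ((0.11 * Rd) / (0.5 + 1.2 * Rk) - Ra)

    # Convert to LF timing parameters (in seconds) for one period T0
    Tp = T0 / (2.0 * Rg)    # instant of peak glottal flow
    Te = Tp * (1.0 + Rk)    # instant of main excitation (max. negative flow derivative)
    Ta = Ra * T0            # effective duration of the return phase
    return Tp, Te, Ta

# Example: sweep Rd from tense (low Rd) to lax (high Rd) voice at F0 = 120 Hz (assumed value)
T0 = 1.0 / 120.0
for Rd in np.linspace(0.3, 2.7, 9):
    Tp, Te, Ta = rd_to_lf_params(Rd, T0)
    print(f"Rd={Rd:.2f}  Tp={Tp*1e3:.2f} ms  Te={Te*1e3:.2f} ms  Ta={Ta*1e3:.2f} ms")
```

As Rd increases, the open phase lengthens and the return phase becomes more gradual, which is the waveform correlate of moving from tenser towards laxer phonation.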
