Could Function-Specific Prosodic Cues Be Used As a Basis for Non-Speech User Interface Sound Design?

It is widely accepted that the nonverbal aspects of vocal expression perform important functions in vocal communication. Certain acoustic qualities of a vocal utterance can effectively communicate the speaker's emotions and intentions to another person. This study examines the possibility of using such prosodic qualities of vocal expressions in human interaction to design effective non-speech user interface sounds. In an empirical setting, utterances with four context-situated communicative functions were gathered from 20 participants. Time series of fundamental frequency (F0) and intensity were extracted from the utterances and analysed statistically. The results show that individual communicative functions have distinct prosodic characteristics with respect to pitch contour and intensity. This implies that function-specific prosodic cues can be imitated in the design of communicative interface sounds for the corresponding functions in human-computer interaction.
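The analysis described above rests on frame-by-frame estimates of fundamental frequency and intensity of the kind produced by Praat. As a minimal, illustrative sketch (not the authors' code), the snippet below estimates the F0 of one analysis frame by autocorrelation and its intensity as an RMS level in dB; the frame sizes, lag bounds, and the synthetic test tone are assumptions chosen for the example.

```python
# Hypothetical sketch of per-frame F0 and intensity estimation,
# loosely mirroring the kind of analysis Praat performs.
# Pure Python, standard library only.
import math

def estimate_f0(frame, sample_rate, f0_min=75.0, f0_max=500.0):
    """Estimate the fundamental frequency (Hz) of one analysis frame
    as the autocorrelation peak within a plausible pitch range."""
    lag_min = int(sample_rate / f0_max)  # shortest period considered
    lag_max = int(sample_rate / f0_min)  # longest period considered
    best_lag, best_corr = lag_min, float("-inf")
    for lag in range(lag_min, min(lag_max, len(frame) - 1) + 1):
        corr = sum(frame[i] * frame[i + lag]
                   for i in range(len(frame) - lag))
        if corr > best_corr:
            best_corr, best_lag = corr, lag
    return sample_rate / best_lag

def rms_intensity_db(frame, ref=1.0):
    """Frame intensity in dB relative to `ref` amplitude, via RMS."""
    rms = math.sqrt(sum(x * x for x in frame) / len(frame))
    return 20.0 * math.log10(rms / ref)

# Synthetic 200 Hz tone, 8 kHz sampling, 50 ms frame:
sr = 8000
frame = [math.sin(2 * math.pi * 200 * n / sr) for n in range(400)]
print(estimate_f0(frame, sr))      # 200.0
print(round(rms_intensity_db(frame), 2))
```

Applying such estimators to successive overlapping frames of an utterance yields the F0 and intensity time series that the statistical comparison of communicative functions is based on.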
