EMOTION-SENSITIVE HUMAN-COMPUTER INTERFACES

People are polite to their computers, are flattered by them, form teams with them, and even interact emotionally with them. In their experiments, Reeves and Nass (The Media Equation, 1996) showed that humans impose their interpersonal behavioral patterns onto computers. The design of human-computer interfaces should therefore reflect this observation in order to facilitate effective communication. To build a human-computer interface that is sensitive to the user's expressed emotion, we investigate spectral, prosodic, and verbal cues in the user's utterance. Based on these cues, we show that the classification system achieves accuracies comparable to human performance. Finally, we demonstrate how to integrate information about the expressed emotion into a dialog system. The dialog system employs different discourse strategies depending on the expressed emotion, allowing for natural and effective communication between the user and the system.
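The pipeline described above can be summarized in code. The following is a minimal sketch, not the authors' implementation: prosodic features summarized from an utterance feed a classifier, and the detected emotion selects a discourse strategy. The feature set, labels, training data, and strategies are illustrative assumptions.

```python
# Minimal sketch of emotion-sensitive dialog control (illustrative, not the
# authors' system): prosodic cues -> emotion classifier -> discourse strategy.
import numpy as np
from sklearn.linear_model import LogisticRegression

def prosodic_features(f0, energy):
    """Summarize pitch (f0) and energy contours into a fixed-length vector."""
    voiced = f0[f0 > 0]  # keep voiced frames only (f0 == 0 marks unvoiced)
    slope = np.diff(voiced).mean() if len(voiced) > 1 else 0.0
    return np.array([
        voiced.mean(), voiced.std(),   # pitch level and variability
        energy.mean(), energy.std(),   # loudness level and variability
        slope,                         # crude proxy for pitch movement
    ])

# Hypothetical training data: one 5-dim feature vector per utterance,
# labeled 0 = neutral, 1 = annoyed. Real data would come from labeled speech.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 5))
y = rng.integers(0, 2, size=40)
clf = LogisticRegression().fit(X, y)

# Emotion-dependent discourse strategies (assumed for illustration).
STRATEGIES = {0: "proceed with the normal prompt",
              1: "apologize and rephrase the prompt"}

def respond(f0, energy):
    """Pick a discourse strategy based on the emotion expressed in one utterance."""
    emotion = clf.predict(prosodic_features(f0, energy).reshape(1, -1))[0]
    return STRATEGIES[int(emotion)]

# Example call with synthetic pitch (Hz) and energy contours.
print(respond(rng.uniform(80, 300, 100), rng.uniform(0.1, 1.0, 100)))
```

In a full system, the spectral and verbal cues mentioned in the abstract would be concatenated with the prosodic vector before classification; this sketch shows only the prosodic path for brevity.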
