Adding the Emotional Dimension to Scripting Character Dialogues

We present an extension of the CrossTalk system that allows authors to model emotional behaviour on three levels: scripting, processing, and expression. CrossTalk is a self-explaining virtual character exhibition for public spaces; its SceneMaker authoring suite provides authors with a screenplay-like language for scripting character and user interactions. The extension adds a set of appraisal and dialogue act tags to the original CrossTalk scripting language, making the generation of emotional behaviour possible. These tags rely on CrossTalk’s new EmotionEngine, which computes and maintains an emotional state for each character. In combination with the ContextMemory module, the EmotionEngine enables the characters to adapt to user feedback and to react emotionally to previous encounters with users. We describe the use of the appraisal and dialogue act tags, their processing in the EmotionEngine, and their impact on the characters’ verbal and non-verbal expressive behaviour.
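The abstract describes the mechanism only at a high level: appraisal tags in the script feed an engine that maintains a per-character emotional state, which in turn drives expression. The sketch below illustrates one plausible reading of that pipeline. It is a minimal, hypothetical example, not CrossTalk's actual EmotionEngine: the tag vocabulary, the valence/arousal representation, the decay rule, and all names (EmotionalState, APPRAISAL_TABLE, process_tag) are assumptions introduced here for illustration.

```python
from dataclasses import dataclass

# Hypothetical appraisal tag as it might appear in a scene script, e.g.
#   [appraisal character="Cyberella" event="praise" intensity="0.8"]
# The syntax and field names are illustrative, not CrossTalk's actual notation.

@dataclass
class EmotionalState:
    """Simple valence/arousal state with exponential decay toward neutral."""
    valence: float = 0.0   # -1.0 (negative) .. 1.0 (positive)
    arousal: float = 0.0   #  0.0 (calm)     .. 1.0 (excited)
    decay: float = 0.1     # fraction lost per update step

    def appraise(self, delta_valence: float, delta_arousal: float) -> None:
        """Fold an appraised event into the current state, clamped to range."""
        self.valence = max(-1.0, min(1.0, self.valence + delta_valence))
        self.arousal = max(0.0, min(1.0, self.arousal + delta_arousal))

    def step(self) -> None:
        """Decay the state toward neutral between appraisals."""
        self.valence *= (1.0 - self.decay)
        self.arousal *= (1.0 - self.decay)

# Illustrative mapping from appraisal-tag event types to state deltas;
# the event names and numbers are made up for this sketch.
APPRAISAL_TABLE = {
    "praise":   ( 0.4, 0.2),
    "insult":   (-0.5, 0.4),
    "surprise": ( 0.0, 0.5),
}

def process_tag(state: EmotionalState, event: str, intensity: float) -> None:
    """Apply one appraisal tag to a character's emotional state."""
    dv, da = APPRAISAL_TABLE.get(event, (0.0, 0.0))
    state.appraise(dv * intensity, da * intensity)

if __name__ == "__main__":
    cyberella = EmotionalState()
    process_tag(cyberella, "praise", 0.8)
    print(cyberella)   # positive valence, mildly raised arousal
    cyberella.step()
    print(cyberella)   # slightly decayed toward neutral
```

In a pipeline like the one the abstract outlines, the resulting state would then be read by the expression layer to select verbal wording, gesture, and prosody; a persistent memory (in CrossTalk, the ContextMemory module) would carry such state across encounters with returning users.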
