E-Drama: Facilitating Online Role-play using an AI Actor and Emotionally Expressive Characters

This paper describes a multi-user role-playing environment, referred to as "e-drama", which enables groups of people to converse online in scenario-driven virtual environments. The starting point of this research is an existing application known as "edrama", a 2D graphical environment in which users are represented by static cartoon figures. Tools have been developed to integrate the existing edrama application with several new components that support avatars with emotionally expressive behaviours, rendered in a 3D environment. The new functionality includes the extraction of affect from open-ended improvisational text. The results of the affective analysis are then used to: (a) control an automated improvisational AI actor, EMMA (emotion, metaphor and affect), which operates a bit-part character in the improvisation; and (b) drive the avatar animations, via the Demeanour framework, in the user interface so that the characters react bodily in ways consistent with the affect they are expressing. Finally, we describe user trials demonstrating that these changes improve the quality of social interaction and users' sense of presence. Moreover, the system has the potential to extend conventional classroom education for young people with or without learning disabilities by providing round-the-clock, personalised training in social skills, language and career-oriented role-play, together with automatic monitoring.

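The overall pipeline can be illustrated with a minimal sketch (Python). All names here, such as detect_affect, EmmaActor and AvatarAnimator, are hypothetical illustrations rather than the system's actual API: affect is first extracted from an utterance, and the result then conditions both EMMA's scripted reply and the speaker's avatar animation.

```python
# Minimal sketch of the e-drama affect pipeline described above.
# detect_affect, EmmaActor and AvatarAnimator are hypothetical names,
# not the paper's actual interfaces.

from dataclasses import dataclass


@dataclass
class AffectResult:
    """Affect label and intensity extracted from one user utterance."""
    emotion: str      # e.g. "angry", "happy", "neutral"
    intensity: float  # 0.0 (none) to 1.0 (strong)


def detect_affect(utterance: str) -> AffectResult:
    """Toy keyword-based detector standing in for EMMA's affect and
    metaphor analysis of open-ended improvisational text."""
    lowered = utterance.lower()
    if any(w in lowered for w in ("hate", "angry", "stupid")):
        return AffectResult("angry", 0.8)
    if any(w in lowered for w in ("love", "great", "happy")):
        return AffectResult("happy", 0.7)
    return AffectResult("neutral", 0.1)


class EmmaActor:
    """Bit-part character whose replies are conditioned on detected affect."""

    def respond(self, affect: AffectResult) -> str:
        if affect.emotion == "angry":
            return "Hey, calm down, let's talk this through."
        if affect.emotion == "happy":
            return "Glad to hear it! What happens next?"
        return "Go on..."


class AvatarAnimator:
    """Maps detected affect onto an expressive body animation, in the
    spirit of the Demeanour framework."""

    def animate(self, affect: AffectResult) -> str:
        return f"play_animation(emotion={affect.emotion!r}, strength={affect.intensity})"


# One turn of the improvisation: analyse the text once, then use the
# result both to drive EMMA's reply and the speaker's avatar animation.
if __name__ == "__main__":
    utterance = "I hate the way you always ignore me!"
    affect = detect_affect(utterance)
    print(EmmaActor().respond(affect))
    print(AvatarAnimator().animate(affect))
```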