Semi-Autonomous Avatars: A New Direction for Expressive User Embodiment

Computer-animated characters are rapidly becoming a regular part of our lives. They are starting to take the place of actors in film and television, and they are now an integral part of most computer games. Perhaps most interestingly, in on-line games and chat rooms they represent users visually in the form of avatars, becoming our on-line identities, our embodiments in a virtual world. On-line environments such as “Second Life” are currently being taken up by people who would not traditionally have considered playing games, largely because of their greater emphasis on social interaction. These environments require avatars that are more expressive and that can make on-line social interaction feel more like face-to-face conversation. Computer-animated characters come in many different forms. Film characters require substantial off-line animator effort to achieve high levels of quality; such techniques are not suitable for real-time applications and are not the focus of this chapter. Non-player characters in games (typically the adversaries) use limited artificial intelligence to react autonomously to events in real time. Avatars, by contrast, are completely controlled by their users, reacting to events solely through user commands. This chapter discusses the distinction between fully autonomous characters and completely controlled avatars, and argues that this differentiation may no longer be useful: avatar technology may need to incorporate more autonomy if it is to live up to the demands of mass appeal. We first discuss the two categories and present reasons for combining them. We then describe previous work in this area, and finally we present our own framework for semi-autonomous avatars.
