Towards Expressive Gaze Manner in Embodied Virtual Agents

Empathy can be viewed in largely cognitive terms, as the ability to engage in role taking: to perceive, imagine, and take on the psychological point of view of another [Piaget, 1965]. It can also be seen in more affective terms, as the capacity to react emotionally to another's emotional state [e.g., Stotland, 1978]. In either case, revealing a person's, or a virtual character's, internal state is an important part of inducing an empathic response in another.

This paper considers the problem of designing a model of expressive gaze manner for virtual characters. By expressive gaze manner, we mean the way a person's thoughts and feelings are conveyed through the physical manner in which they gaze. For example, angry glares, gapes, stares, furtive glances, and peeks differ both in their physical properties and in what they reveal about the person gazing. A model of expressive gaze must therefore describe how to vary the physical properties of a character's gaze shift in order to exploit this expressive capacity, not merely when and where the character should gaze. Ultimately, the purpose of this model is to provide Embodied Conversational Agents (ECAs) with expressive gaze.

This paper describes an exploratory study that is a first step in collecting the data required to build such a model. We extracted gaze data from computer-graphics (CG) animated motion pictures and performed a preliminary analysis of a portion of this data. Animated films are of particular interest here because of the animator's skill in creating obviously artificial characters that nevertheless evoke emotion and empathy in the audience.
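To make the distinction between gaze target and gaze manner concrete, the following is a minimal sketch in Python. The parameter names are hypothetical illustrations, not drawn from the paper or from any existing ECA framework; they simply show how the same gaze target might be performed with different physical manner.

```python
# Hypothetical sketch: a gaze shift separated into *where* the character
# looks (the target) and *how* the shift is physically performed (the manner).
from dataclasses import dataclass

@dataclass
class GazeShift:
    target: str            # what the character looks at (the "where")
    head_alignment: float  # 0.0 = eyes only, 1.0 = full head turn
    peak_velocity: float   # peak angular velocity of the shift, deg/s
    eyelid_opening: float  # 0.0 = closed, 1.0 = wide open (e.g., a gape)
    hold_duration: float   # seconds the gaze is held on the target

# The same "look at the door" directive, performed as a furtive glance
# versus an angry glare: identical target, different physical manner.
furtive_glance = GazeShift("door", head_alignment=0.1, peak_velocity=450.0,
                           eyelid_opening=0.6, hold_duration=0.3)
angry_glare = GazeShift("door", head_alignment=0.9, peak_velocity=300.0,
                        eyelid_opening=0.4, hold_duration=2.5)
```

A model of expressive gaze, in these terms, would specify how internal state maps onto manner parameters of this kind, rather than onto the target alone.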