Empathy can be viewed in largely cognitive terms, as the ability to engage in role taking: to perceive, imagine, and take on the psychological point of view of another [Piaget, 1965]. It can also be seen in more affective terms, as reacting emotionally to another's emotional state [e.g., Stotland, 1978]. In either case, revealing a person's, or a virtual character's, internal state is an important aspect of inducing an empathic response in another. This paper considers the problem of designing a model of expressive gaze manner for virtual characters. By expressive gaze manner, we mean how what a person is thinking and feeling is conveyed through the physical manner in which they gaze. For example, angry glares, gapes, stares, furtive glances, and peeks differ both in their physical properties and in what they reveal about the person gazing. A model of expressive gaze must therefore describe how to change the physical properties of a character's gaze shift in order to exploit this expressive ability, not merely when and where the agent should gaze. Ultimately, the purpose of this model is to provide Embodied Conversational Agents (ECAs) with expressive gaze. This paper describes an exploratory study that is a first step in collecting the data required to build such a model. We extracted gaze data from computer-graphics (CG) animated motion pictures and performed a preliminary analysis of a portion of this data. Animated films are of particular interest here given the animator's skill in creating obviously artificial characters that nevertheless evoke emotion and empathy in the audience.
[1] K. Scherer et al., Handbook of Affective Sciences, 2003.
[2] W. Lewis Johnson et al., "Animated Agents for Procedural Training in Virtual Reality: Perception, Cognition, and Motor Control," Applied Artificial Intelligence, 1999.
[3] P. Bentler, "Multivariate Analysis with Latent Variables: Causal Modeling," 1980.
[4] E. Stotland et al., Empathy, Fantasy and Helping, 1978.
[5] M. Argyle et al., Gaze and Mutual Gaze, British Journal of Psychiatry, 1994.
[6] F. Thomas et al., Disney Animation: The Illusion of Life, 1981.
[7] C. E. Peters et al., "Bottom-up Visual Attention for Virtual Human Animation," Proceedings of Computer Animation and Social Agents (CASA), 2003.
[8] C. Elliott, The Affective Reasoner: A Process Model of Emotions in a Multi-Agent System, 1992.
[9] D. W. Fiske et al., Face-to-Face Interaction: Research, Methods, and Theory, 1977.
[10] A. Kendon, Conducting Interaction: Patterns of Behavior in Focused Encounters, 1990.
[11] R. Brown, Social Psychology: The Second Edition, 1986.
[12] M. Schoen, "The Moral Judgment of the Child," 1933.
[13] N. I. Badler et al., "Where to Look? Automating Attending Behaviors of Virtual Human Characters," AGENTS '99, 1999.