Modeling Emotional State and Personality for Conversational Agents

We describe an architecture for constructing a character-based agent based on speech and graphical interactions. The architecture uses models of emotion and personality encoded as Bayesian networks to 1) diagnose the emotional state and personality of the user, and 2) generate appropriate behavior by an automated agent in response to the user's input. Classes of interaction that are interpreted and/or generated include:

- word choice and syntactic framing of utterances,
- speech pace, rhythm, and pitch contour, and
- gesture, expression, and body language.

In particular, we describe the structure of the Bayesian networks that form the basis for interpretation and generation, and we discuss the effects of alternative formulations on assessment and inference.

There is growing interest in computer characters that can detect the emotional state and personality of the user and respond appropriately [Picard, 1995; Reeves and Nass, 1995]. Research has shown that users respond emotionally to their computers. Emotion and personality are of interest to us primarily because of the ways in which they influence behavior, and precisely because those behaviors are communicative: in human dialogues they establish a channel of social interaction that is crucial to the smoothness and effectiveness of the conversation. To be an effective communicant, a computer character needs to respond appropriately to these signals from the user and should produce its own emotional signals that reinforce, rather than confuse, its intended communication.
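To make the diagnostic direction of this architecture concrete, the following is a minimal sketch of the kind of inference such a Bayesian network supports: a hidden emotional-state node generates observable interaction cues (word choice, speech pace, gesture), and exact enumeration recovers a posterior over the hidden state from observed cues. The node names, states, and all probability values here are illustrative assumptions, not figures from the paper; the actual networks described later are richer.

```python
# Hypothetical two-layer Bayesian network: a hidden "valence" node
# (positive/negative emotional state) generates three observable cue
# families mentioned in the text. All numbers are invented for illustration.

PRIOR = {"positive": 0.5, "negative": 0.5}

# P(cue value | valence); cues are conditionally independent given valence.
CPT = {
    "word_choice": {
        "positive": {"expansive": 0.7, "terse": 0.3},
        "negative": {"expansive": 0.2, "terse": 0.8},
    },
    "speech_pace": {
        "positive": {"fast": 0.6, "slow": 0.4},
        "negative": {"fast": 0.3, "slow": 0.7},
    },
    "gesture": {
        "positive": {"open": 0.8, "closed": 0.2},
        "negative": {"open": 0.3, "closed": 0.7},
    },
}

def posterior_valence(observations):
    """Diagnose the user's emotional state from observed cues by exact
    enumeration over the hidden node, then normalization (Bayes' rule)."""
    scores = {}
    for valence, prior in PRIOR.items():
        p = prior
        for cue, value in observations.items():
            p *= CPT[cue][valence][value]
        scores[valence] = p
    z = sum(scores.values())
    return {v: p / z for v, p in scores.items()}

if __name__ == "__main__":
    # Terse wording, slow speech, and closed body language together
    # shift the posterior strongly toward negative valence.
    obs = {"word_choice": "terse", "speech_pace": "slow", "gesture": "closed"}
    print(posterior_valence(obs))
```

The generative direction described in the abstract would run the same network the other way: fix the agent's intended emotional state and sample cue values from the conditional distributions to drive its wording, speech, and gesture.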