Modeling Human Behavior for Virtual Training Systems

Constructing highly realistic agents is essential if agents are to be employed in virtual training systems. In training for collaboration based on face-to-face interaction, the generation of emotional expressions is one key element. In training for guidance based on one-to-many interaction, such as giving directions during evacuations, emotional expressions must be supplemented by diverse agent behaviors to make the training realistic. To reproduce diverse behavior, we characterize agents using various combinations of operation rules instantiated by the user operating the agent. To accomplish this goal, we introduce a user modeling method based on participatory simulations. These simulations enable us to acquire the information observed by each user in the simulation together with the user's operation history. Using these data and domain knowledge, including known operation rules, we can generate an explanation for each behavior. Moreover, applying hypothetical reasoning, which offers consistent selection of hypotheses, to the generation of explanations allows us to use otherwise incompatible operation rules as domain knowledge. To validate the proposed modeling method, we apply it to the acquisition of an evacuee's model in a fire-drill experiment. We successfully acquire a subject's model that corresponds to the results of an interview with the subject.
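
The sketch below illustrates, in a minimal and purely hypothetical form, the kind of explanation generation the abstract describes: candidate operation rules serve as hypotheses, and a rule is added to a subject's model only if it fires on what the user could observe, predicts the action the user actually performed, and does not conflict with hypotheses accepted so far. The data structures, rule names, and conflict handling are assumptions for illustration, not the paper's actual formalism.

```python
# Hypothetical sketch of consistent hypothesis selection over operation rules.
# All names and structures here are assumed for illustration only.

from dataclasses import dataclass
from typing import Callable, Dict, List, Set


@dataclass(frozen=True)
class OperationRule:
    """A candidate hypothesis: under an observed condition, the user takes an action."""
    name: str
    condition: Callable[[Dict], bool]        # evaluated on what the user observed
    action: str                              # action the rule predicts
    conflicts_with: frozenset = frozenset()  # rules that cannot hold together


def explain_step(observation: Dict,
                 performed_action: str,
                 rules: List[OperationRule],
                 accepted: Set[str]) -> Set[str]:
    """Return the accepted hypotheses extended with rules that (a) fire on this
    observation, (b) predict the action actually performed, and (c) remain
    consistent with the hypotheses accepted so far."""
    explanation = set(accepted)
    for rule in rules:
        if not rule.condition(observation) or rule.action != performed_action:
            continue
        if rule.conflicts_with & explanation:
            continue  # skip hypotheses inconsistent with the current set
        explanation.add(rule.name)
    return explanation


# Toy usage with made-up evacuation rules and a two-step operation log.
rules = [
    OperationRule("follow_crowd",
                  lambda o: o.get("crowd_visible", False),
                  "move_with_crowd",
                  conflicts_with=frozenset({"avoid_crowd"})),
    OperationRule("avoid_crowd",
                  lambda o: o.get("crowd_visible", False),
                  "take_side_exit",
                  conflicts_with=frozenset({"follow_crowd"})),
    OperationRule("head_to_exit_sign",
                  lambda o: o.get("exit_sign_visible", False),
                  "move_to_exit"),
]

log = [  # (what the subject could observe, what the subject actually did)
    ({"crowd_visible": True}, "move_with_crowd"),
    ({"exit_sign_visible": True}, "move_to_exit"),
]

accepted: Set[str] = set()
for obs, act in log:
    accepted = explain_step(obs, act, rules, accepted)
print(accepted)  # e.g. {'follow_crowd', 'head_to_exit_sign'}
```

In this toy run, "follow_crowd" and "avoid_crowd" are mutually incompatible rules, yet both can sit in the domain knowledge; consistency is enforced only when building a particular subject's explanation, which mirrors the role the abstract attributes to hypothetical reasoning.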
