Simulating interactions of avatars in high-dimensional state space

Efficient computation of strategic movements is essential for controlling virtual avatars intelligently in computer games and 3D virtual environments. Such a module is needed to control non-player characters (NPCs) that fight, play team sports, or move through a dense crowd. Reinforcement learning is one approach to achieving real-time optimal control. However, the huge state space of human interactions makes it difficult to apply existing learning methods to control avatars that interact densely with other characters. In this research, we propose a new methodology to efficiently plan the movements of an avatar interacting with another. We exploit the fact that the subspace of meaningful interactions is much smaller than the full state space of the two avatars. We collect samples efficiently by exploring the subspace where dense interactions between the avatars occur, favoring samples that have high connectivity with the other samples. From the collected samples, a finite state machine (FSM) called the Interaction Graph is constructed. At run time, we compute the optimal action of each avatar by min-max search or dynamic programming on the Interaction Graph. The methodology is applicable to controlling NPCs in fighting and ball-sports games.
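At run time the pipeline described above reduces to a game-tree computation over a precomputed graph. The sketch below is a minimal illustration of that step under stated assumptions, not the paper's implementation: the hypothetical Node structure, the turn labels, the discount factor, and the names solve and best_action are all illustrative. Each node stands for one sampled joint state of the two avatars, edges point at states reachable by one action, and values are backed up with a min-max variant of value iteration (dynamic programming).

```python
from dataclasses import dataclass

@dataclass
class Node:
    # One sampled joint state of the two interacting avatars (hypothetical).
    reward: float        # immediate payoff for the controlled avatar
    is_max_turn: bool    # True if the controlled avatar acts next
    edges: list[int]     # indices of states reachable by one action

def solve(graph: list[Node], gamma: float = 0.9, sweeps: int = 100) -> list[float]:
    """Min-max value iteration over the Interaction Graph.

    At max-turn nodes the controlled avatar picks the best successor;
    at min-turn nodes the opponent picks the one that is worst for us.
    """
    values = [n.reward for n in graph]
    for _ in range(sweeps):
        for i, n in enumerate(graph):
            if not n.edges:
                continue  # terminal sample: value stays at its immediate reward
            pick = max if n.is_max_turn else min
            values[i] = n.reward + gamma * pick(values[j] for j in n.edges)
    return values

def best_action(graph: list[Node], values: list[float], i: int) -> int:
    """Return the successor index the current mover would choose at node i."""
    pick = max if graph[i].is_max_turn else min
    return pick(graph[i].edges, key=lambda j: values[j])

# Toy three-state graph: an attacking transition the opponent can answer,
# and a losing terminal state.
g = [
    Node(reward=0.0,  is_max_turn=True,  edges=[1, 2]),
    Node(reward=1.0,  is_max_turn=False, edges=[0]),
    Node(reward=-1.0, is_max_turn=False, edges=[]),
]
v = solve(g)
print(best_action(g, v, 0))  # -> 1: the attacking transition
```

A depth-limited min-max search at run time would walk the same edges on demand instead of precomputing a value for every node; the abstract leaves open which variant applies where, so the sweep-based version above is just one plausible reading.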
