Performance-based control interface for character animation

Most game interfaces today are largely symbolic, translating simplified input such as keystrokes into the choreography of full-body character movement. In this paper, we describe a system that directly uses human motion performance to provide a radically different, and much more expressive, interface for controlling virtual characters. Our system takes a data feed from a motion capture system as input and, in real time, translates the performance into corresponding actions in a virtual world. The difficulty with such an approach arises from the need to manage the discrepancy between the real and virtual worlds, leading to two important subproblems: 1) recognizing the user's intention, and 2) simulating the appropriate action based on that intention and the virtual context. We address these problems by first enabling the virtual world's designer to specify possible activities in terms of prominent features of the world, along with associated motion clips depicting the interactions. We then integrate the prerecorded motions with the online performance and dynamic simulation to synthesize seamless interaction of the virtual character with the simulated virtual world. The result is a flexible interface through which a user can make freeform control choices while the resulting character motion maintains both physical realism and the user's personal style.
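To make the pipeline concrete, the sketch below illustrates one plausible reading of the recognition-and-synthesis loop the abstract describes: each designer-specified activity pairs a prominent world feature with a motion clip, the live performance is matched against candidate activities, and the selected clip is blended with the online pose stream. This is a minimal illustration, not the authors' implementation; every name here (Activity, recognize_intention, blend, the trigger-distance heuristic) is a hypothetical stand-in.

```python
# Minimal sketch of the described pipeline (hypothetical names throughout):
# designer-specified activities attach motion clips to world features; the
# live mocap pose selects an activity, whose clip is blended with the
# online performance.

from dataclasses import dataclass

@dataclass
class Activity:
    name: str            # e.g., "climb_ledge"
    feature: str         # prominent world feature the activity attaches to
    clip: list           # prerecorded motion clip: a list of pose vectors
    trigger_dist: float  # how close the avatar must be to the feature

def pose_distance(pose_a, pose_b):
    """Euclidean distance between two flattened joint-angle vectors."""
    return sum((a - b) ** 2 for a, b in zip(pose_a, pose_b)) ** 0.5

def recognize_intention(live_pose, avatar_pos, world, activities):
    """Pick the activity whose clip start best matches the performance,
    restricted to activities whose world feature is within range."""
    best, best_cost = None, float("inf")
    for act in activities:
        feat_pos = world[act.feature]
        dist = sum((p - q) ** 2 for p, q in zip(avatar_pos, feat_pos)) ** 0.5
        if dist > act.trigger_dist:
            continue  # feature too far away; intention implausible
        cost = pose_distance(live_pose, act.clip[0])
        if cost < best_cost:
            best, best_cost = act, cost
    return best

def blend(live_pose, clip_pose, w):
    """Linearly blend from the online performance (w=0) into the
    prerecorded clip (w=1) to avoid a visible pop at the transition."""
    return [(1 - w) * a + w * b for a, b in zip(live_pose, clip_pose)]
```

In a system like the one described, a loop of this shape would run each frame: ramp the blend weight w from 0 to 1 after recognition so the character eases from the user's live pose into the clip, then hand the blended pose to the dynamic simulation for physical correction.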
