Customizing by doing for responsive video game characters

This paper presents a game in which players customize the behavior of their characters using their own movements while playing. Players' movements are recorded with a motion capture system; the player then labels these movements and uses them as input to a machine learning algorithm that generates a responsive behavior model. This interface supports a more embodied approach to character design that we call "Customizing by Doing". We present a user study showing that using their own movements made users feel more engaged with both the game and the design process, due in large part to a feeling of personal ownership of the movements.
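The pipeline described above — record motion clips, hand-label them, and learn a model that maps live motion to a behavior — can be illustrated with a minimal sketch. This is not the paper's implementation: the feature extraction (mean speed and vertical range) and the 1-nearest-neighbour classifier are simplified stand-ins for whatever features and learning algorithm the system actually uses, and all names and data here are hypothetical.

```python
# Minimal sketch of a "Customizing by Doing"-style pipeline (hypothetical
# features and data): a player's recorded movements are reduced to feature
# vectors, hand-labelled, and used to build a simple classifier that maps
# live motion to a behavior label.
import math

def extract_features(frames):
    """Collapse a motion clip (a list of (x, y, z) positions) into a
    crude feature vector: mean frame-to-frame speed and vertical range."""
    speeds = [math.dist(a, b) for a, b in zip(frames, frames[1:])]
    heights = [f[1] for f in frames]
    return (sum(speeds) / len(speeds), max(heights) - min(heights))

class NearestNeighbourModel:
    """1-nearest-neighbour stand-in for the learned behavior model."""
    def __init__(self):
        self.examples = []  # list of (feature_vector, label) pairs

    def add_labelled_clip(self, frames, label):
        self.examples.append((extract_features(frames), label))

    def classify(self, frames):
        target = extract_features(frames)
        return min(self.examples,
                   key=lambda ex: math.dist(ex[0], target))[1]

# The player records and labels two example movements...
model = NearestNeighbourModel()
model.add_labelled_clip([(0, 0, 0), (0, 0.1, 0), (0, 0.2, 0)], "wave")
model.add_labelled_clip([(0, 0, 0), (2, 1.5, 0), (4, 0, 0)], "jump")

# ...and at play time the character responds to a new, similar movement.
print(model.classify([(0, 0, 0), (2.1, 1.4, 0), (3.9, 0, 0)]))  # prints "jump"
```

In the actual system the classifier would be trained on richer motion-capture features, but the structure — labelled examples in, a responsive mapping from movement to behavior out — is the same.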
