Motion Capture Based Animation for Virtual Human Demonstrators: Modeling, Parameterization and Planning

A large collection of character animation techniques has been developed to date, and impressive results have been achieved in recent years. The main approaches can be categorized as physics-based, algorithmic, or data-based. High-quality animation today is still largely data-based and achieved through motion capture technologies. While great realism is achieved, current solutions still suffer from limited character control, limited ability to address cluttered environments, and disconnection from higher-level constraints and task-oriented specifications. This dissertation addresses these limitations and achieves an autonomous character that is able to demonstrate, instruct and deliver information to observers in a realistic and human-like way.

The first part of this thesis addresses motion synthesis with a simple example-based motion parameterization algorithm for satisfying generic spatial constraints at interactive frame rates. The approach directly optimizes blending weights for a consistent set of example motions until the specified constraints are best met. An in-depth analysis compares the proposed approach with three other popular blending techniques and uncovers the pros and cons of each method. The algorithm has also been integrated into an immersive motion modeling platform, which enables programming of generic actions by direct demonstration of example motions.

To address actions in cluttered environments while maintaining the realism of motion capture examples, the concept of exploring the blending space of example motions is then introduced. A bidirectional time-synchronized sampling-based planner with lazy collision evaluation is proposed for planning motion variations around obstacles while preserving the original quality of the example motions. Coupled with a locomotion planner, it generates realistic whole-body motion in cluttered environments.

Finally, high-level specifications for demonstrative actions are addressed with the proposed whole-body PLACE planner. It is based on coordination models extracted from behavioral studies in which participants performed demonstrations involving locomotion and pointing under varied conditions. The planner achieves coordinated body positioning, locomotion, action execution and gaze synthesis in order to engage observers in demonstrative scenarios.
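The core idea of the parameterization stage, optimizing blending weights over a consistent set of example motions until a spatial constraint is best met, can be illustrated with a minimal sketch. It assumes, for illustration only, that the constraint (a target end-effector position) depends linearly on the blending weights via each example's end-effector position; the function name `inverse_blend` and the projected-gradient solver are hypothetical choices, not the dissertation's actual formulation.

```python
# Minimal sketch of example-based inverse blending: optimize convex blending
# weights so that the blended end-effector position best satisfies a spatial
# constraint. The linear constraint model and the projected-gradient solver
# are simplifying assumptions for illustration.
import numpy as np

def inverse_blend(example_targets, goal, iters=500, lr=0.1):
    """Return convex weights w minimizing ||w @ P - goal||^2,
    where row i of P is example motion i's end-effector position."""
    P = np.asarray(example_targets, dtype=float)   # k x 3 example positions
    k = P.shape[0]
    w = np.full(k, 1.0 / k)                        # start from a uniform blend
    for _ in range(iters):
        err = w @ P - goal                         # current constraint error
        grad = P @ err                             # gradient of 0.5*||err||^2
        w -= lr * grad                             # gradient step on weights
        w = np.clip(w, 0.0, None)                  # keep weights non-negative
        w /= w.sum()                               # renormalize to the simplex
    return w

# Usage: three example motions whose end-effectors reach different spots;
# the optimized blend reaches a target inside their convex hull.
P = [[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
goal = np.array([0.5, 0.25, 0.0])
w = inverse_blend(P, goal)
blended = w @ np.asarray(P)
```

In practice the constraint is a nonlinear function of the weights (the blended skeleton must be re-evaluated at each step), but the structure of the optimization, searching the blending space directly, is the same.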
