Interactive Demonstration of Pointing Gestures for Virtual Trainers

While interactive virtual humans are becoming widely used in education, training, and the delivery of instructions, building the animations required for such interactive characters remains complex and time-consuming. A key problem is that most systems controlling virtual humans rely on pre-defined animations, which must be rebuilt by skilled animators for each new scenario. To improve this situation, this paper proposes a framework based on the direct demonstration of motions via a simplified, easy-to-wear set of motion capture sensors. The proposed system integrates motion segmentation, clustering, and interactive motion blending in order to provide a seamless interface for programming motions by demonstration.
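The interactive motion blending mentioned above can be illustrated with a minimal sketch of quaternion-based pose interpolation, a standard building block for blending between demonstrated motion segments. The function names `slerp` and `blend_poses`, and the (w, x, y, z) quaternion convention, are assumptions for illustration and are not taken from the paper.

```python
import math

def slerp(q0, q1, t):
    """Spherical linear interpolation between two unit quaternions (w, x, y, z).

    Illustrative sketch only; not the paper's implementation.
    """
    dot = sum(a * b for a, b in zip(q0, q1))
    # Take the shorter arc by flipping one quaternion if needed.
    if dot < 0.0:
        q1 = tuple(-c for c in q1)
        dot = -dot
    if dot > 0.9995:
        # Nearly parallel: fall back to a normalized linear interpolation.
        q = tuple((1 - t) * a + t * b for a, b in zip(q0, q1))
        n = math.sqrt(sum(c * c for c in q))
        return tuple(c / n for c in q)
    theta = math.acos(dot)
    s0 = math.sin((1 - t) * theta) / math.sin(theta)
    s1 = math.sin(t * theta) / math.sin(theta)
    return tuple(s0 * a + s1 * b for a, b in zip(q0, q1))

def blend_poses(pose_a, pose_b, t):
    """Blend two skeleton poses (lists of per-joint quaternions) with weight t."""
    return [slerp(qa, qb, t) for qa, qb in zip(pose_a, pose_b)]
```

For example, blending the identity rotation with a 90-degree rotation about one axis at t = 0.5 yields the 45-degree rotation, which is the behavior a parameterized blending interface relies on when interpolating between captured example motions.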
