A blendshape model for mapping facial motions to an android

Facial motion is an important part of natural, and therefore effective, communication, so the android Repliee Q2 should display realistic facial motion. In computer graphics animation, such motion is created by mapping human motion to an animated character. This paper proposes a method for mapping human facial motion to the android, using a linear model of the android based on the blendshape models used in computer graphics. Because the model is derived from motion capture of the android itself, it also captures the android's physical limitations. The paper shows that the blendshape method can be successfully applied to the android, and that a linear model is sufficient to represent android facial motion, which makes control straightforward. Measurements of the produced motion reveal the android's physical limitations and point to the main areas for improving its design.
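
The core idea of a linear blendshape model can be illustrated with a short sketch: the face is represented as a neutral shape plus a weighted sum of basis displacements, and the weights that best reproduce a captured human pose are found by least squares and clipped to the feasible actuator range. The Python sketch below uses hypothetical marker counts, basis shapes, and limits; it shows the standard blendshape-fitting technique under those assumptions, not the paper's actual implementation.

```python
# Minimal sketch of a linear blendshape mapping. All names and sizes here
# are hypothetical illustrations, not the paper's code or data.
import numpy as np

n_markers = 20          # hypothetical number of facial markers
n_shapes = 10           # hypothetical number of blendshapes / actuators
rng = np.random.default_rng(0)

# Neutral face as a flattened vector of 3D marker coordinates, and a basis
# matrix B whose columns are blendshape displacements from the neutral face
# (in the paper, obtained from motion capture of the android itself).
neutral = rng.normal(size=3 * n_markers)
B = rng.normal(size=(3 * n_markers, n_shapes))

def fit_weights(target: np.ndarray) -> np.ndarray:
    """Least-squares blendshape weights reproducing a captured pose."""
    w, *_ = np.linalg.lstsq(B, target - neutral, rcond=None)
    # The android's actuators have physical limits; clip the weights to a
    # feasible range (assumed here to be [0, 1]).
    return np.clip(w, 0.0, 1.0)

# Example: map one captured human frame to android actuator weights.
human_frame = neutral + B @ rng.uniform(0, 1, n_shapes)
weights = fit_weights(human_frame)
print(weights.round(2))
```

Clipping the weights is the simplest stand-in for the physical limitations that the paper models directly by deriving the basis from motion capture of the android.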
