Imitation System for Humanoid Robotics Heads

This paper presents a new system for recognizing and imitating a set of facial expressions using the visual information acquired by the robot. In addition, the proposed system detects and imitates the interlocutor's head pose and motion. The approach is intended for human-robot interaction (HRI) and consists of two consecutive stages: i) a real-time visual analysis of the human facial expression that estimates the interlocutor's emotional state (i.e., happiness, sadness, anger, fear, or neutral) using a Bayesian approach; and ii) an estimation of the user's head pose and motion. This information updates the robot's knowledge of the people in its field of view and thus allows the robot to use it in future actions and interactions. Both the human facial expression and the head motion are imitated by Muecas, a 12-degree-of-freedom (DOF) robotic head. The paper also introduces the concept of human and robot facial expression models, which are included in a new cognitive module that builds and updates selective representations of the robot and the agents in its environment to enhance future HRI. Experimental results show the quality of the detection and imitation in different scenarios with Muecas.
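The abstract does not specify the Bayesian model, the facial features, or the joint mapping, so the following is only a minimal sketch of one plausible reading: a Gaussian naive Bayes classifier over pre-extracted facial feature vectors for the five emotional states named above, plus a simple clamped mapping from an estimated head pose to neck joints. Everything here (the class `GaussianNaiveBayes`, the function `head_pose_to_joints`, the joint names, and the joint limits) is a hypothetical illustration, not the paper's actual implementation.

```python
import numpy as np

# The five emotional states named in the abstract.
EMOTIONS = ["happiness", "sadness", "anger", "fear", "neutral"]

class GaussianNaiveBayes:
    """Minimal Gaussian naive Bayes over facial feature vectors.

    The paper only states that a Bayesian approach is used; the feature
    representation and this particular model are assumptions.
    """

    def fit(self, X, y):
        # X: (n_samples, n_features) feature vectors; y: integer labels
        # indexing into EMOTIONS. Store per-class Gaussian statistics.
        self.classes_ = np.unique(y)
        self.mean_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        self.var_ = np.array([X[y == c].var(axis=0) + 1e-9 for c in self.classes_])
        self.log_prior_ = np.log(np.array([(y == c).mean() for c in self.classes_]))

    def predict(self, X):
        # Per-class Gaussian log-likelihood plus log prior; pick the
        # maximum a posteriori class for each sample.
        ll = -0.5 * (
            np.log(2.0 * np.pi * self.var_[:, None, :])
            + (X[None, :, :] - self.mean_[:, None, :]) ** 2 / self.var_[:, None, :]
        ).sum(axis=2)
        return self.classes_[np.argmax(ll + self.log_prior_[:, None], axis=0)]

def head_pose_to_joints(yaw, pitch, roll, limit=np.radians(45.0)):
    """Clamp an estimated head pose (radians) onto the neck joints of a
    hypothetical 12-DOF head; joint names and limits are illustrative."""
    clamp = lambda a: float(np.clip(a, -limit, limit))
    return {"neck_yaw": clamp(yaw), "neck_pitch": clamp(pitch), "neck_roll": clamp(roll)}
```

In a usage scenario under the same assumptions, `fit` would be called on labeled feature vectors extracted offline from face images, `predict` would run per frame on the live feature vector, and the clamped joint dictionary would be sent to the head's motor controllers; the remaining DOFs (e.g., eyes, eyebrows, mouth) would be driven by the recognized expression rather than by the pose.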
