ExpressionBot: An emotive lifelike robotic face for face-to-face communication

This article proposes an emotive lifelike robotic face, called ExpressionBot, designed to support verbal and non-verbal communication between the robot and humans, with the goal of closely modeling the dynamics of natural face-to-face communication. The proposed robotic head consists of two major components: 1) a hardware component comprising a small projector, a fish-eye lens, a custom-designed mask, and a neck system with three degrees of freedom; and 2) a facial animation system, projected onto the robotic mask, that can present facial expressions, realistic eye movement, and accurate visual speech. We present three studies that compare Human-Robot Interaction with the robotic head to Human-Computer Interaction with a screen-based version of the same animated avatar. The studies indicate that the robotic face is well accepted by users, with some advantages in facial expression recognition and mutual eye-gaze contact.
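To make the two-layer architecture described above concrete, here is a minimal sketch, in Python, of how such a system might be organized: an animation layer that produces per-frame expression weights for the projected face, and a 3-DOF neck layer that orients the head toward a gaze target. This is our illustration, not the authors' implementation; all class names, method names, and the head-frame convention (x right, y up, z forward) are assumptions for the example.

```python
# Hypothetical sketch of a projected-face robot's control layers.
# Not the ExpressionBot codebase; names and conventions are illustrative.

from dataclasses import dataclass, field
import math


@dataclass
class NeckPose:
    """Pan/tilt/roll angles in radians for a 3-degree-of-freedom neck."""
    pan: float = 0.0
    tilt: float = 0.0
    roll: float = 0.0


@dataclass
class FaceFrame:
    """One animation frame: expression blend weights in [0, 1]."""
    weights: dict = field(default_factory=dict)


def expression_frame(expression: str, intensity: float) -> FaceFrame:
    """Build a single-expression frame at the given intensity."""
    basic = {"happy", "sad", "angry", "surprised", "disgusted", "fearful"}
    if expression not in basic:
        raise ValueError(f"unknown expression: {expression}")
    # Clamp intensity so downstream rendering always sees a valid weight.
    return FaceFrame(weights={expression: max(0.0, min(1.0, intensity))})


def gaze_to_neck(x: float, y: float, z: float) -> NeckPose:
    """Point the head at a 3-D target given in an assumed head frame
    (x right, y up, z forward); roll is left at zero."""
    pan = math.atan2(x, z)
    tilt = math.atan2(y, math.hypot(x, z))
    return NeckPose(pan=pan, tilt=tilt, roll=0.0)


if __name__ == "__main__":
    frame = expression_frame("happy", 0.8)
    pose = gaze_to_neck(0.3, 0.1, 1.0)
    print(frame.weights, (round(pose.pan, 3), round(pose.tilt, 3)))
```

Separating the animation layer from the neck layer in this way mirrors the hardware/animation split in the abstract: the projected face can be swapped for a screen-based avatar (as in the reported studies) without touching the gaze/neck code.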

[1] David Hanson, et al. Zeno: A cognitive character, AAAI 2008.

[2] Tony Belpaeme, et al. A study of a retro-projected robotic face and its effectiveness for gaze reading by humans, 2010 5th ACM/IEEE International Conference on Human-Robot Interaction (HRI).

[3] Gordon Cheng, et al. Development of an integrated multi-modal communication robotic face, 2012 IEEE Workshop on Advanced Robotics and its Social Impacts (ARSO).

[4] Xue Yan, et al. iCat: an animated user-interface robot with personality, AAMAS '05.

[5] Gordon Cheng, et al. "Mask-bot": A life-size robot head using talking head animation for human-robot communication, 2011 11th IEEE-RAS International Conference on Humanoid Robots.

[6] Gabriel Skantze, et al. Furhat: A Back-Projected Human-Like Robot Head for Multiparty Human-Machine Interaction, COST 2102 Training School, 2011.

[7] R. Simmons, et al. Grace and George: Social Robots at AAAI, 2004.

[8] Dejan Todorović, et al. Geometrical basis of perception of gaze direction, Vision Research, 2006.

[9] Tony Belpaeme, et al. Towards retro-projected robot faces: An alternative to mechatronic and android faces, RO-MAN 2009 - The 18th IEEE International Symposium on Robot and Human Interactive Communication.

[10] E. Vajda. Handbook of the International Phonetic Association: A Guide to the Use of the International Phonetic Alphabet, 2000.

[11] Katherine B. Martin, et al. Facial Action Coding System, 2015.

[12] Karl F. MacDorman, et al. The Uncanny Valley [From the Field], IEEE Robotics & Automation Magazine, 2012.

[13] Ronald A. Cole, et al. Animating visible speech and facial expressions, The Visual Computer, 2004.

[14] Maja J. Mataric, et al. The role of physical embodiment in human-robot interaction, ROMAN 2006 - The 15th IEEE International Symposium on Robot and Human Interactive Communication.

[15] Daniel Bolaños. The Bavieca open-source speech recognition toolkit, 2012 IEEE Spoken Language Technology Workshop (SLT).

[16] Minoru Hashimoto, et al. Facial expression of a robot using a curved surface display, 2005 IEEE/RSJ International Conference on Intelligent Robots and Systems.