Imitation of human expressions based on emotion estimation by mental simulation

Abstract Humans can express their own emotions and estimate the emotional states of others during communication. This paper proposes a unified model that can both estimate the emotional states of others and generate emotional self-expressions. The proposed model utilizes a multimodal restricted Boltzmann machine (RBM), a type of stochastic neural network. RBMs can abstract latent information from input signals and reconstruct the signals from that latent information. We use these two characteristics to address issues affecting previously proposed emotion models: constructing an emotional representation for the estimation and generation of emotion instead of relying on heuristic features, and realizing mental simulation to infer the emotions of others from their ambiguous signals. Our experimental results showed that the proposed model can extract features representing the distribution of emotion categories through self-organized learning. Imitation experiments demonstrated that, using our model, a robot can generate expressions better than with a direct mapping mechanism when the expressions of others contain emotional inconsistencies. Moreover, our model can improve the estimated belief in the emotional states of others by generating imaginary sensory signals from defective multimodal signals (i.e., mental simulation). These results suggest that these abilities of the proposed model can facilitate emotional human-robot communication in more complex situations.
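The abstract describes two uses of the RBM's reconstruction ability: generating expressions from a learned latent representation and filling in missing or defective sensory signals ("mental simulation"). The following is a minimal sketch of that second idea, assuming a toy two-modality RBM with binary visible units trained by CD-1; the class name, modality dimensions, and hyperparameters are illustrative assumptions and do not reproduce the authors' actual model, which may use Gaussian visible units and different training details.

```python
# Toy two-modality RBM illustrating "mental simulation": clamp an observed
# modality (face) and let Gibbs sampling imagine the missing one (voice).
# Illustrative only; all names, sizes, and hyperparameters are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class MultimodalRBM:
    def __init__(self, n_face, n_voice, n_hidden, lr=0.05):
        self.n_face, self.n_voice = n_face, n_voice
        n_vis = n_face + n_voice
        self.W = rng.normal(0, 0.01, size=(n_vis, n_hidden))
        self.b = np.zeros(n_vis)      # visible biases
        self.c = np.zeros(n_hidden)   # hidden biases
        self.lr = lr

    def hidden_probs(self, v):
        return sigmoid(v @ self.W + self.c)

    def visible_probs(self, h):
        return sigmoid(h @ self.W.T + self.b)

    def contrastive_divergence(self, v0):
        """One CD-1 update on a batch of concatenated [face, voice] vectors."""
        h0 = self.hidden_probs(v0)
        h0_sample = (rng.random(h0.shape) < h0).astype(float)
        v1 = self.visible_probs(h0_sample)
        h1 = self.hidden_probs(v1)
        n = v0.shape[0]
        self.W += self.lr * (v0.T @ h0 - v1.T @ h1) / n
        self.b += self.lr * (v0 - v1).mean(axis=0)
        self.c += self.lr * (h0 - h1).mean(axis=0)

    def simulate_missing_voice(self, face, n_gibbs=20):
        """Clamp the observed face features and let Gibbs sampling fill in
        the missing voice features of the visible layer."""
        v = np.concatenate([face, np.full(self.n_voice, 0.5)])[None, :]
        for _ in range(n_gibbs):
            h = self.hidden_probs(v)
            v = self.visible_probs(h)
            v[0, :self.n_face] = face  # keep the observed modality fixed
        return v[0, self.n_face:]      # imagined voice features

if __name__ == "__main__":
    # Fabricated correlated data purely for demonstration.
    face = (rng.random((200, 16)) > 0.5).astype(float)
    voice = face[:, :8]
    data = np.hstack([face, voice])
    rbm = MultimodalRBM(n_face=16, n_voice=8, n_hidden=32)
    for _ in range(200):
        rbm.contrastive_divergence(data)
    print(np.round(rbm.simulate_missing_voice(face[0]), 2))
```

In this sketch the same weights serve both directions: inference of a latent (emotional) representation from multimodal input, and reconstruction of plausible sensory signals from that representation, which is the mechanism the abstract attributes to mental simulation.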
