Analysis of affective cues in human-robot interaction: A multi-level approach

This paper reviews key challenges in affect recognition research as they bear on the design of affect-sensitive social robots. An important requirement for a social robot is that its recognition abilities adapt to the context of interaction. The paper presents an approach in which different affective cues are analysed depending on the distance at which the user and robot interact.
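To make the idea of distance-dependent cue analysis concrete, the sketch below shows one way a robot might select which affective cues to attend to at a given interaction range. This is not the paper's implementation: the zone names, the distance thresholds, and the cue sets are illustrative assumptions, loosely inspired by Hall's proxemic zones.

```python
# Minimal sketch of distance-dependent affective cue selection.
# Zone boundaries (in metres) and cue sets are illustrative assumptions,
# not values specified in the paper.

from dataclasses import dataclass

@dataclass(frozen=True)
class Zone:
    name: str
    max_distance_m: float      # upper bound of this zone, in metres
    cues: tuple[str, ...]      # affective cues plausibly analysable at this range

# Zones ordered from nearest to farthest, loosely following Hall's proxemics.
ZONES = (
    Zone("intimate", 0.45, ("facial action units", "gaze", "voice prosody")),
    Zone("personal", 1.2,  ("facial expression", "head pose", "voice prosody")),
    Zone("social",   3.6,  ("body posture", "gesture", "head pose")),
    Zone("public",   float("inf"), ("full-body movement", "spatial behaviour")),
)

def cues_for_distance(distance_m: float) -> tuple[str, ...]:
    """Return the affective cues an analyser would attend to at this range."""
    for zone in ZONES:
        if distance_m <= zone.max_distance_m:
            return zone.cues
    return ()

if __name__ == "__main__":
    for d in (0.3, 1.0, 2.5, 5.0):
        print(f"{d:.1f} m -> {cues_for_distance(d)}")
```

The design intuition is simple: fine-grained cues such as facial action units are only reliably observable at close range, while at larger distances the analyser must fall back on coarser cues such as posture and whole-body movement, so the active cue set is gated by the estimated user-robot distance.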
