On Designing Expressive Robot Behavior: The Effect of Affective Cues on Interaction

[1]  Johannes Hewig,et al.  A revised film set for the induction of basic emotions , 2005 .

[2]  Cynthia Breazeal,et al.  Persuasive Robotics: The influence of robot gender on human behavior , 2009, 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems.

[3]  S. Scott,et al.  Perceptual Cues in Nonverbal Vocal Expressions of Emotion , 2010 .

[4]  P. Shaver,et al.  Individual differences in emotional complexity: their psychological implications , 2004, Journal of Personality.

[5]  Zhigang Deng,et al.  Analysis of emotion recognition using facial expressions, speech and multimodal information , 2004, ICMI '04.

[6]  Rainer Reisenzein,et al.  The Cambridge Handbook of Personality Psychology: Personality and emotion , 2009 .

[7]  Adam Kendon,et al.  How gestures can become like words , 1988 .

[8]  BRIEF REPORT , 2003, Cognition & emotion.

[9]  Loïc Kessous,et al.  Multimodal emotion recognition from expressive faces, body gestures and speech , 2007, AIAI.

[10]  Dorret I. Boomsma,et al.  Relationships between trait emotional intelligence and the Big Five in the Netherlands , 2010 .

[11]  H. Berenbaum,et al.  The dimensions of emotional intelligence, alexithymia, and mood awareness: Associations with personality and performance on an emotional stroop task , 2003 .

[12]  Cendri A. C. Hutcherson,et al.  Attention and emotion influence the relationship between extraversion and neural response , 2008, Social Cognitive and Affective Neuroscience.

[13]  Adriana Tapus,et al.  Towards Enhancing Human-Robot Relationship: Customized Robot's Behavior to Human's Profile , 2014, AAAI Fall Symposia.

[14]  P. Ekman,et al.  Facial action coding system: a technique for the measurement of facial movement , 1978 .

[15]  Youngwoo Yoon,et al.  Robots Learn Social Skills: End-to-End Learning of Co-Speech Gesture Generation for Humanoid Robots , 2018, 2019 International Conference on Robotics and Automation (ICRA).

[16]  Pierre-Yves Oudeyer,  The production and recognition of emotions in speech: features and algorithms , 2003 .

[17]  Justine Cassell,et al.  BEAT: the Behavior Expression Animation Toolkit , 2001, Life-like characters.

[18]  Stacy Marsella,et al.  Predicting Co-verbal Gestures: A Deep and Temporal Modeling Approach , 2015, IVA.

[19]  H. Gunes,et al.  Live human–robot interactive public demonstrations with automatic emotion and personality prediction , 2019, Philosophical Transactions of the Royal Society B.

[20]  Stefan Kopp,et al.  A friendly gesture: Investigating the effect of multimodal robot behavior in human-robot interaction , 2011, 2011 RO-MAN.

[21]  Heiga Zen,et al.  Statistical parametric speech synthesis using deep neural networks , 2013, 2013 IEEE International Conference on Acoustics, Speech and Signal Processing.

[22]  Roddy Cowie,et al.  Describing the emotional states that are expressed in speech , 2003, Speech Commun..

[23]  Sigal G. Barsade,et al.  Human abilities: emotional intelligence , 2008, Annual Review of Psychology.

[24]  Jan B. F. van Erp,et al.  Emotional Responses to Multisensory Environmental Stimuli , 2016 .

[25]  Kazuhiko Sumi,et al.  Evaluation of Speech-to-Gesture Generation Using Bi-Directional LSTM Network , 2018, IVA.

[26]  Catherine Pelachaud,et al.  Multimodal expressive embodied conversational agents , 2005, ACM Multimedia.

[27]  Bernard Bel,et al.  Speech Prosody 2002, Aix-en-Provence, France, 11-13 April 2002: proceedings , 2002 .

[28]  Adam Kendon,et al.  The study of gesture: Some remarks on its history , 1983 .

[29]  Dimitris Samaras,et al.  EyeOpener: Editing Eyes in the Wild , 2017, ACM Trans. Graph..

[30]  Yong Tao,et al.  Compound facial expressions of emotion , 2014, Proceedings of the National Academy of Sciences.

[31]  Mike Edgington,et al.  Investigating the limitations of concatenative synthesis , 1997, EUROSPEECH.

[32]  A. Tapus,et al.  A Voice-Based Gender and Internal State Combined Detection Model , 2020 .

[33]  John L. Arnott,et al.  Implementation and testing of a system for producing emotion-by-rule in synthetic speech , 1995, Speech Commun..

[34]  Naoshi Kaneko,et al.  Analyzing Input and Output Representations for Speech-Driven Gesture Generation , 2019, IVA.

[35]  Guy Hoffman,et al.  Design and Evaluation of a Peripheral Robotic Conversation Companion , 2015, 2015 10th ACM/IEEE International Conference on Human-Robot Interaction (HRI).

[36]  Justine Cassell,et al.  Human conversation as a system framework: designing embodied conversational agents , 2001 .

[37]  Jean-Claude Martin,et al.  Combining Facial and Postural Expressions of Emotions in a Virtual Character , 2009, IVA.

[38]  Cynthia Breazeal,et al.  Toward sociable robots , 2003, Robotics Auton. Syst..

[39]  Sercan Ömer Arik,et al.  Deep Voice 2: Multi-Speaker Neural Text-to-Speech , 2017, NIPS.

[40]  Adriana Tapus,et al.  Towards an intelligent system for generating an adapted verbal and nonverbal combined behavior in human–robot interaction , 2015, Autonomous Robots.

[41]  S. Mozziconacci,  Prosody and emotions , 2002, Proceedings of the International Conference on Speech Prosody.

[42]  James Townsend,et al.  Making faces: Creating three-dimensional parameterized models of facial expression , 2001, Behavior Research Methods, Instruments, & Computers.

[43]  Rolf-Detlef Treede,et al.  Emotion Elicitation: A Comparison of Pictures and Films , 2016, Front. Psychol..

[44]  Lisa Feldman Barrett,et al.  Unpacking Emotion Differentiation , 2015 .

[45]  P. Salovey,et al.  Emotional Intelligence , 1990, Encyclopedia of Personality and Individual Differences.

[46]  Katarzyna Wac,et al.  Multimodal Integration of Emotional Signals from Voice, Body, and Context: Effects of (In)Congruence on Emotion Recognition and Attitudes Towards Robots , 2019, Int. J. Soc. Robotics.

[47]  Catherine Pelachaud,et al.  A Common Gesture and Speech Production Framework for Virtual and Physical Agents , 2012 .

[48]  Nick Campbell,et al.  Speech Database Design for a Concatenative Text-to-Speech Synthesis System for Individuals with Communication Disorders , 2003, Int. J. Speech Technol..

[49]  André Thomas,et al.  Service Orientation in Holonic and Multi-agent Manufacturing , 2015 .

[50]  John P. Baker,et al.  Are emotional clarity and emotion differentiation related? , 2013, Cognition & emotion.

[51]  Evgenios Vlachos,et al.  Android Emotions Revealed , 2012, ICSR.

[52]  C. Breazeal,  Towards Sociable Robots , 2002 .

[53]  Daichi Mochihashi,et al.  A Probabilistic Approach to Unsupervised Induction of Combinatory Categorial Grammar in Situated Human-Robot Interaction , 2018, 2018 IEEE-RAS 18th International Conference on Humanoid Robots (Humanoids).

[54]  Giorgio Metta,et al.  Design of the robot-cub (iCub) head , 2006, Proceedings 2006 IEEE International Conference on Robotics and Automation, 2006. ICRA 2006..

[55]  Tadahiro Taniguchi,et al.  Towards Understanding Language through Perception in Situated Human-Robot Interaction: From Word Grounding to Grammar Induction , 2018, ArXiv.

[56]  Pierre-Yves Oudeyer,et al.  The production and recognition of emotions in speech: features and algorithms , 2003, Int. J. Hum. Comput. Stud..

[57]  K. Scherer,et al.  Personality and emotion , 2009 .

[58]  R. Adolphs,et al.  Emotion Perception from Face, Voice, and Touch: Comparisons and Convergence , 2017, Trends in Cognitive Sciences.

[59]  Soo-Young Lee,et al.  Emotional End-to-End Neural Speech Synthesizer , 2017, NIPS 2017.

[60]  Jaakko Lehtinen,et al.  Audio-driven facial animation by joint end-to-end learning of pose and emotion , 2017, ACM Trans. Graph..

[61]  Ingo Lütkebohle,et al.  The Bielefeld anthropomorphic robot head “Flobi” , 2010, 2010 IEEE International Conference on Robotics and Automation.

[62]  Chunghyun Ahn,et al.  Emotional Speech Synthesis with Rich and Granularized Control , 2019, ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).

[63]  Marc Schröder,et al.  The German Text-to-Speech Synthesis System MARY: A Tool for Research, Development and Teaching , 2003, Int. J. Speech Technol..

[64]  Rosalind W. Picard,  Affective computing: challenges , 2003, Int. J. Hum. Comput. Stud..

[65]  Samy Bengio,et al.  Tacotron: Towards End-to-End Speech Synthesis , 2017, INTERSPEECH.

[66]  Adriana Tapus,et al.  Multimodal adapted robot behavior synthesis within a narrative human-robot interaction , 2015, 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).

[67]  Adriana Tapus,et al.  User adaptable robot behavior , 2011, 2011 International Conference on Collaboration Technologies and Systems (CTS).

[68]  Amy Isard,et al.  SSML: A speech synthesis markup language , 1997, Speech Commun..

[69]  Adriana Tapus,et al.  Speech to Head Gesture Mapping in Multimodal Human-Robot Interaction , 2011, ECMR.

[70]  Emily S. Cross,et al.  The Perception of Emotion in Artificial Agents , 2018, IEEE Transactions on Cognitive and Developmental Systems.

[71]  P. Salovey,et al.  Emotional development and emotional intelligence: Educational implications , 1997 .

[72]  Adriana Tapus,et al.  Towards an online voice-based gender and internal state detection model , 2011, 2011 6th ACM/IEEE International Conference on Human-Robot Interaction (HRI).

[73]  Jan De Houwer,et al.  A time course analysis of the affective priming effect , 2001 .

[74]  A. Tellegen,et al.  An Alternative "Description of Personality": The Big-Five Factor Structure , 1990 .

[75]  H. Wallbott,  Bodily expression of emotion , 1998 .

[76]  Adriana Tapus,et al.  An Online Fuzzy-Based Approach for Human Emotions Detection: An Overview on the Human Cognitive Model of Understanding and Generating Multimodal Actions , 2015, Intelligent Assistive Robots.

[77]  S. Hemenover,et al.  Is dispositional emotional intelligence synonymous with personality? , 2006 .

[78]  Goldie Nejat,et al.  Recognizing Emotional Body Language Displayed by a Human-like Social Robot , 2014, International Journal of Social Robotics.

[79]  Reginald B. Adams,et al.  Development and validation of Image Stimuli for Emotion Elicitation (ISEE): A novel affective pictorial system with test-retest repeatability , 2018, Psychiatry Research.

[80]  A. Aly,  A Bayesian Approach to Phrase Understanding through Cross-Situational Learning , 2018 .

[81]  Adriana Tapus,et al.  Prosody-based adaptive metaphoric head and arm gestures synthesis in human robot interaction , 2013, 2013 16th International Conference on Advanced Robotics (ICAR).

[82]  Pedro B. Albuquerque,et al.  Emotional Induction Through Music: Measuring Cardiac and Electrodermal Responses of Emotional States and Their Persistence , 2019, Front. Psychol..

[83]  Angel P. del Pobil,et al.  The Effects of Robot's Body Gesture and Gender in Human-Robot Interaction , 2011 .

[84]  D. McNeill,  Hand and Mind: What Gestures Reveal about Thought , 1992 .

[85]  Frank K. Soong,et al.  On the training aspects of Deep Neural Network (DNN) for parametric TTS synthesis , 2014, 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).

[86]  L. de Silva,et al.  Facial emotion recognition using multi-modal information , 1997, Proceedings of ICICS 1997 International Conference on Information, Communications and Signal Processing.

[87]  Yisong Yue,et al.  A deep learning approach for generalized speech animation , 2017, ACM Trans. Graph..

[88]  Cristina P. Santos,et al.  Facial Expressions and Gestures to Convey Emotions with a Humanoid Robot , 2013, ICSR.

[89]  Adriana Tapus,et al.  Prosody-driven robot arm gestures generation in human-robot interaction , 2012, 2012 7th ACM/IEEE International Conference on Human-Robot Interaction (HRI).

[90]  Loïc Kessous,et al.  Emotion Recognition through Multiple Modalities: Face, Body Gesture, Speech , 2008, Affect and Emotion in Human-Computer Interaction.

[91]  J. Cassell,et al.  Embodied conversational agents , 2000 .

[92]  Sonja A. Kotz,et al.  On the Time Course of Vocal Emotion Recognition , 2011, PloS one.

[93]  Daniel Thalmann,et al.  SMILE: A Multilayered Facial Animation System , 1991, Modeling in Computer Graphics.

[94]  Xue Yan,et al.  iCat: an animated user-interface robot with personality , 2005, AAMAS '05.

[95]  Amir Aly,  Towards an Interactive Human-Robot Relationship: Developing a Customized Robot Behavior to Human Profile , 2014 .

[96]  P. Ekman,et al.  The Repertoire of Nonverbal Behavior: Categories, Origins, Usage, and Coding , 1969 .

[97]  Douglas Paton,et al.  Exploring the Demands on Nurses Working in Health Care Facilities During a Large-Scale Natural Disaster , 2016 .

[98]  Stefan Kopp,et al.  Synthesizing multimodal utterances for conversational agents , 2004, Comput. Animat. Virtual Worlds.

[99]  Maja Pantic,et al.  End-to-End Speech-Driven Facial Animation with Temporal GANs , 2018, BMVC.

[100]  Norman I. Badler,et al.  Animating facial expressions , 1981, SIGGRAPH '81.

[101]  M. Bano,et al.  Emotional Intelligence and Personality Traits among University Teachers: Relationship and Gender Differences , 2013 .