Affective Conversational Interfaces

To build conversational interfaces whose behavior is credible and expressive, we must endow them with the capability to recognize, adapt to, and render emotion. In this chapter, we explain how emotion recognition is handled within conversational interfaces, covering the modeling and representation of emotion and its recognition from physiological signals, acoustics, text, facial expressions, and gestures, and how emotion synthesis is achieved through expressive speech and multimodal embodied agents. We also survey the main open tools and databases available to developers who wish to incorporate emotion into their conversational interfaces.
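As a simple illustration of the text channel mentioned above, the sketch below classifies an utterance with a toy keyword lexicon. The lexicon and category set are hypothetical placeholders chosen for this example; a real system would instead train statistical classifiers on the annotated corpora and open tools surveyed in this chapter.

```python
# Minimal sketch of lexicon-based emotion recognition from text.
# The lexicon below is a hypothetical toy example; production systems
# rely on large annotated corpora and learned classifiers.
from collections import Counter

EMOTION_LEXICON = {
    "happy": "joy", "great": "joy", "love": "joy",
    "sad": "sadness", "sorry": "sadness",
    "angry": "anger", "hate": "anger",
    "afraid": "fear", "worried": "fear",
}

def recognize_emotion(utterance: str) -> str:
    """Return the emotion label whose cue words appear most often,
    or 'neutral' when no cue word is found."""
    tokens = utterance.lower().split()
    counts = Counter(EMOTION_LEXICON[t] for t in tokens if t in EMOTION_LEXICON)
    return counts.most_common(1)[0][0] if counts else "neutral"

if __name__ == "__main__":
    print(recognize_emotion("I am so happy you called"))  # -> joy
```

A conversational interface could use such a label to adapt its response strategy, for instance by switching to a more empathic speaking style when sadness is detected.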
