Modeling Users' Emotional State for an Enhanced Human-Machine Interaction

Spoken conversational agents have been proposed as a way to enable more natural and intuitive interaction with the environment and with human-computer interfaces. In this paper, we propose a framework that models the user's emotional state during the dialog and adapts the dialog model dynamically, yielding more efficient, adapted, and usable conversational agents. We evaluated our proposal by developing a user-adapted agent that provides tourist information, and we present a detailed discussion of its positive influence on the success of the interaction, on the information and services provided, and on the perceived quality.
