Evaluating a Spoken Dialogue System that Detects and Adapts to User Affective States

We present an evaluation of a spoken dialogue system that detects and adapts to user disengagement and uncertainty in real time. We compare this version of our system to a version that adapts only to user disengagement, and to a version that ignores user disengagement and uncertainty entirely. We find a significant increase in task success when comparing both affect-adaptive versions of our system to our non-adaptive baseline, but only for male users.
