A user modeling-based performance analysis of a wizarded uncertainty-adaptive dialogue system corpus

Motivated by prior spoken dialogue system research in user modeling, we analyze interactions between performance and user class in a dataset previously collected with two wizarded spoken dialogue tutoring systems that adapt to user uncertainty. We focus on user classes defined by expertise level and gender, and on both an objective (learning) and a subjective (user satisfaction) performance metric. We find that lower-expertise users learn best from one adaptive system but prefer the other, while higher-expertise users learn more from one adaptive system but do not prefer either. Female users both learn best from and prefer the same adaptive system, while male users prefer one adaptive system but do not learn more from either. Our results provide an empirical basis for future investigations into whether adaptive system performance can be improved by adapting to user uncertainty differently for different user classes.
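
The abstract does not spell out the statistical analysis behind these performance-by-user-class comparisons. As a rough illustration only, the sketch below shows one plausible way such an interaction could be probed, assuming a hypothetical per-user results file with pretest, posttest, system, and expertise columns; none of these names, nor the choice of a two-way ANOVA on normalized learning gain, is taken from the paper.

```python
# Illustrative sketch, not the paper's actual analysis.
# Assumes a hypothetical results.csv with one row per user and columns:
#   pretest, posttest (scores in [0, 1]), system (adaptive condition),
#   expertise (user class label).
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

df = pd.read_csv("results.csv")  # hypothetical per-user results file

# Normalized learning gain: fraction of available improvement achieved.
df["gain"] = (df["posttest"] - df["pretest"]) / (1.0 - df["pretest"])

# Two-way ANOVA: does the effect of the adaptive system differ by expertise class?
model = ols("gain ~ C(system) * C(expertise)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```

The same template could be repeated with gender as the user class, or with a satisfaction rating in place of learning gain as the dependent variable.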
