Incremental Dialogue Understanding and Feedback for Multiparty, Multimodal Conversation

To provide comprehensive listening behavior, virtual humans engaged in dialogue need to incrementally listen, interpret, understand, and react to what someone is saying, in real time, as they are saying it. In this paper, we describe an implemented system for engaging in multiparty dialogue, including incremental understanding and a range of feedback. We present an FML message extension for feedback in multiparty dialogue that can be connected to a feedback realizer. We also describe how the important aspects of that message are calculated by the different modules involved in partial input processing while a speaker is talking in a multiparty dialogue.
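As a rough illustration of the kind of message such an extension implies: FML is XML-based, so a feedback message for multiparty dialogue would plausibly carry the participant roles (who is speaking, who the feedback addresses) and the agent's current level of partial understanding for the realizer to act on. The element and attribute names below are illustrative assumptions, not the paper's actual schema.

```python
import xml.etree.ElementTree as ET

def build_feedback_fml(speaker, addressee, understanding, polarity):
    """Sketch of a hypothetical FML feedback message (names are assumed,
    not the schema from the paper).

    speaker:       participant currently talking
    addressee:     participant the feedback is directed toward
    understanding: incremental-understanding confidence in [0, 1],
                   updated as partial ASR/NLU results arrive
    polarity:      e.g. "positive" (nod) vs. "negative" (frown)
    """
    fml = ET.Element("fml")
    ET.SubElement(fml, "feedback", {
        "speaker": speaker,
        "addressee": addressee,
        "understanding": f"{understanding:.2f}",
        "polarity": polarity,
    })
    return ET.tostring(fml, encoding="unicode")

# A listener agent might emit such a message mid-utterance, as soon as a
# partial interpretation crosses a confidence threshold.
msg = build_feedback_fml("utah", "harmony", 0.85, "positive")
```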
