Multimodal Feedback in First Encounter Interactions

Human interactions are predominantly conducted via verbal communication, which allows the presentation of sophisticated propositional content. However, much of the interpretation of utterances and of the speaker's attitudes is conveyed through multimodal cues such as facial expressions, hand gestures, head movements, and body posture. This paper reports observations on multimodal communication and feedback-giving activity in first encounter interactions, and discusses how head, hand, and body movements are used in conversational interactions as means of visual interaction management, i.e. unobtrusive ways to control the interaction and to construct shared understanding among the interlocutors. The observations and results contribute to models for coordinating communication in human-human conversations as well as in interactions between humans and intelligent situated agents.