Collaborative Effort towards Common Ground in Situated Human-Robot Dialogue

In situated human-robot dialogue, humans and robots are co-present in a shared environment, yet their abilities to perceive that environment differ significantly, so their representations of the shared world are misaligned. For humans and robots to communicate successfully through language, they must mediate these differences and establish common ground. To address this issue, this paper describes a dialogue system that aims to mediate a shared perceptual basis during human-robot dialogue. In particular, we present an empirical study that examines how the robot's collaborative effort and the performance of its natural language processing modules affect dialogue grounding. Our empirical results indicate that in situated human-robot dialogue, low collaborative effort from the robot may lead its human partner to believe that common ground has been established, even when that belief does not reflect true mutual understanding. To support truly grounded dialogue, the robot should make an extra effort to make its partner aware of its internal representation of the shared world.