Temporal Spatial Inverse Semantics for Robots Communicating with Humans

Effective communication between humans often embeds both temporal and spatial context. While spatial context captures the locations of objects in the environment, temporal context describes how they change over time. In this paper, we propose temporal spatial inverse semantics (TeSIS), which extends the inverse semantics approach so that robots communicating with humans also consider temporal context. Inverse semantics generates natural language requests while accounting for how well human listeners would interpret those requests given the current spatial context. Compared to inverse semantics, our approach additionally incorporates temporal context by referring to spatial information from past states. To achieve this, we extend the sentence structure used in inverse semantics so that generated sentences can refer not only to the current state of the environment but also to previous ones. We develop a new metric based on this extended sentence structure by decomposing a single sentence into multiple independent sentences, each referring to the environment state at a different time. Using this approach, we are able to generate sentences such as "Please pick up the cup beside the oven that was on the dining table." To evaluate our approach, we randomly generate scenarios in an experimental domain, each comprising a description of the current state and several immediately preceding states. We then generate natural language sentences for these scenarios using both the baseline inverse semantics, which uses only spatial context, and our approach. A comparison on Amazon Mechanical Turk shows that TeSIS achieves better accuracy than the baseline, sometimes by a significant margin.
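The decomposition idea behind the metric can be illustrated with a toy sketch. Everything below is an illustrative assumption rather than the paper's actual formulation: object names, the representation of states as name-to-location maps, and the uniform-listener model (the listener picks uniformly among objects satisfying all sub-expressions) are all hypothetical.

```python
# Toy sketch of scoring a temporal referring expression (hypothetical model).
# A sentence like "the cup beside the oven that was on the dining table" is
# decomposed into per-timestep sub-expressions; each filters the candidate
# objects against the environment state at its time.

def tesis_score(sub_expressions, history, target):
    """Probability a uniform listener resolves the expression to `target`.

    sub_expressions: list of (time_index, predicate) pairs, where each
                     predicate takes (object_name, location).
    history:         list of states; history[t] maps object name -> location.
    target:          the intended object's name.
    """
    candidates = set(history[-1])  # objects are assumed persistent over time
    for t, pred in sub_expressions:
        candidates = {n for n in candidates if pred(n, history[t][n])}
    if target not in candidates:
        return 0.0
    return 1.0 / len(candidates)  # uniform choice among remaining matches

# Two cups sit by the oven now, but only cup_a was on the dining table before.
history = [
    {"cup_a": "dining_table", "cup_b": "counter"},  # past state (t = 0)
    {"cup_a": "oven", "cup_b": "oven"},             # current state (t = 1)
]
spatial_only = [(1, lambda n, loc: loc == "oven")]
temporal = spatial_only + [(0, lambda n, loc: loc == "dining_table")]

print(tesis_score(spatial_only, history, "cup_a"))  # ambiguous: 0.5
print(tesis_score(temporal, history, "cup_a"))      # disambiguated: 1.0
```

In this toy model, the spatial-only expression is ambiguous between the two cups, while adding the past-state sub-expression singles out the intended one, which is the effect the temporal context is meant to provide.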
