Using semantic fields to model dynamic spatial relations in a robot architecture for natural language instruction of service robots

We present a methodology for enabling service robots to follow natural language commands from non-expert users, with and without user-specified constraints, focusing on spatial language understanding. As part of our approach, we propose a novel extension to the semantic field model of spatial prepositions that enables the representation of dynamic spatial relations involving paths. We present the design, system modules, and implementation details of our robot software architecture, and discuss the relevance of the proposed methodology to interactive instruction and to task modification through the addition of constraints. The paper concludes with an evaluation of our architecture on a simulated mobile robot operating both in a 2D home environment and on real-world environment maps, demonstrating the generalizability and usefulness of our approach in real-world applications.
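To make the core idea concrete: in the semantic field model, a static preposition such as "near" is modeled as a scalar field over space that peaks at the landmark and decays with distance. A dynamic spatial relation can then be scored by aggregating that field over the waypoints of a path. The sketch below is an illustrative assumption, not the paper's implementation: the Gaussian field shape, the `sigma` parameter, and the use of a max aggregator for a "past"-like relation are all choices made here for clarity.

```python
import math

def near_field(point, landmark, sigma=1.0):
    """Applicability of 'near': peaks at the landmark, decays with distance.
    (Gaussian shape assumed here for illustration.)"""
    dx = point[0] - landmark[0]
    dy = point[1] - landmark[1]
    return math.exp(-(dx * dx + dy * dy) / (2.0 * sigma ** 2))

def path_field(path, landmark, field=near_field):
    """Score a path-based relation by aggregating the static field over the
    path's waypoints; the max captures how close the path ever comes."""
    return max(field(p, landmark) for p in path)

# A path that passes close to a landmark scores higher than one that stays far away.
landmark = (2.0, 0.0)
close_path = [(0.0, 1.0), (1.0, 0.5), (2.0, 0.2), (3.0, 0.5)]
far_path = [(0.0, 4.0), (1.0, 4.0), (2.0, 4.0), (3.0, 4.0)]
assert path_field(close_path, landmark) > path_field(far_path, landmark)
```

Other dynamic relations would swap the aggregator: "along" might average the field over the whole path, while "away from" would require the field to decrease monotonically over time.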
