A Discriminative Approach to Grounded Spoken Language Understanding in Interactive Robotics

Spoken Language Understanding in Interactive Robotics provides computational models of human-machine communication based on vocal input. However, robots operate in specific environments, and the correct interpretation of spoken sentences depends on physical, cognitive, and linguistic aspects triggered by the operational environment. Grounded language processing should exploit both the physical constraints of the context and the knowledge assumptions of the robot, including its subjective perception of the environment, which explicitly affects linguistic reasoning. In this work, a standard linguistic pipeline for semantic parsing is extended toward a form of perceptually informed natural language processing that combines discriminative learning and distributional semantics. Empirical results show a relative error reduction of up to 40%.
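The combination of discriminative learning and distributional semantics can be illustrated with a minimal sketch. The snippet below is not the system described in the paper: the toy word embeddings, the perceptual indicator features, and the Taking/Motion frame labels are illustrative assumptions, and a plain linear SVM stands in for the structured discriminative learner. It only shows the general idea of concatenating distributional evidence about the sentence with features reflecting the robot's perception of the environment.

```python
# Minimal sketch (not the paper's actual system): mapping a command to an
# action frame by combining distributional word vectors with a
# discriminative linear classifier. All data below are toy placeholders.
import numpy as np
from sklearn.svm import LinearSVC

# Toy distributional vectors (in practice: pre-trained word embeddings).
EMB = {
    "take":    np.array([0.9, 0.1, 0.0]),
    "grab":    np.array([0.8, 0.2, 0.1]),
    "fetch":   np.array([0.85, 0.15, 0.05]),
    "go":      np.array([0.1, 0.9, 0.0]),
    "move":    np.array([0.2, 0.8, 0.1]),
    "book":    np.array([0.5, 0.0, 0.6]),
    "kitchen": np.array([0.0, 0.5, 0.7]),
}

def sentence_vector(tokens):
    """Average the word vectors of a tokenized command (bag-of-embeddings)."""
    vecs = [EMB[t] for t in tokens if t in EMB]
    return np.mean(vecs, axis=0) if vecs else np.zeros(3)

def features(tokens, percept):
    """Concatenate linguistic evidence with perceptual context.

    `percept` is a hypothetical indicator vector, e.g.
    [object_visible, location_known], standing in for the robot's
    perception of the operational environment."""
    return np.concatenate([sentence_vector(tokens), percept])

# Tiny training set: (command tokens, perceptual context) -> action frame.
train = [
    (["take", "the", "book"],            [1, 0], "Taking"),
    (["grab", "the", "book"],            [1, 0], "Taking"),
    (["go", "to", "the", "kitchen"],     [0, 1], "Motion"),
    (["move", "to", "the", "kitchen"],   [0, 1], "Motion"),
]
X = np.array([features(t, p) for t, p, _ in train])
y = [label for _, _, label in train]

clf = LinearSVC().fit(X, y)

# "fetch" never occurs in training, but its embedding lies near "take"
# and "grab", so the classifier still predicts the Taking frame.
print(clf.predict([features(["fetch", "the", "book"], [1, 0])]))
```

The design point of the sketch is twofold: distributional vectors let unseen lexical variants ("fetch" for "take") fall near their synonyms in feature space, so the discriminative classifier generalizes across wordings, while the appended perceptual indicators are one simple way grounding can condition the interpretation on the state of the environment.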
