A First-Order-Logic Based Model for Grounded Language Learning

Much is still unknown about how children learn language, but it is clear that they perform “grounded” language learning: they learn grammar and vocabulary not just from examples of sentences, but from examples of sentences in a particular context. Grounded language learning has been the subject of much research. Most of this work focuses on particular aspects, such as constructing semantic parsers, or on particular types of applications. In this paper, we take a broader view that includes an aspect that has received little attention until now: learning the meaning of phrases from phrase/context pairs in which the phrase’s meaning is not explicitly represented. We propose a simple model for this task that uses first-order logic representations for contexts and meanings, together with an incremental learning algorithm. We experimentally demonstrate that the proposed model can explain the gradual learning of simple concepts and language structure, and that it can easily be used for interpretation, generation, and translation of phrases.
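To make the learning setting concrete, the sketch below shows one hypothetical way that learning from phrase/context pairs could look when contexts are sets of first-order atoms: each word’s meaning hypothesis is refined incrementally by intersecting the contexts in which the word occurs, in the spirit of cross-situational learning. This is only an illustrative sketch, not the model proposed in the paper; the class name IncrementalMeaningLearner and the atom encoding are assumptions, and for simplicity the object constants in each context are assumed to have already been replaced by a canonical variable X (a full first-order treatment would instead compute a least general generalization).

```python
# Hypothetical sketch: learning word meanings from (phrase, context) pairs by
# cross-situational intersection. A context is a set of first-order atoms, written
# here as tuples; object constants are assumed to be replaced by a canonical
# variable "X" so that plain set intersection suffices (a real first-order learner
# would compute a least general generalization instead).

from typing import Dict, Set, Tuple

Atom = Tuple[str, ...]   # e.g. ("color", "X", "red")
Context = Set[Atom]      # a scene described by a set of atoms


class IncrementalMeaningLearner:
    """For each word, keep the atoms common to every context in which it was heard."""

    def __init__(self) -> None:
        self.hypotheses: Dict[str, Context] = {}

    def observe(self, phrase: str, context: Context) -> None:
        """Process a single (phrase, context) pair incrementally."""
        for word in phrase.split():
            if word not in self.hypotheses:
                # First encounter: the whole context is the initial hypothesis.
                self.hypotheses[word] = set(context)
            else:
                # Later encounters: keep only the atoms observed every time.
                self.hypotheses[word] &= context

    def meaning(self, word: str) -> Context:
        return self.hypotheses.get(word, set())


if __name__ == "__main__":
    learner = IncrementalMeaningLearner()
    learner.observe("the red ball", {("shape", "X", "ball"), ("color", "X", "red")})
    learner.observe("the red cube", {("shape", "X", "cube"), ("color", "X", "red")})
    print(learner.meaning("red"))   # -> {('color', 'X', 'red')}
```

In this toy run, “red” ends up associated only with the color atom shared by the two scenes; function words such as “the” retain the same residue, which is one reason richer models reason over phrase structure rather than isolated words.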
