Zero-shot Learning of Classifiers from Natural Language Quantification

Humans can efficiently learn new concepts using language. We present a framework through which a set of explanations of a concept can be used to learn a classifier without access to any labeled examples. We use semantic parsing to map explanations to probabilistic assertions grounded in latent class labels and observed attributes of unlabeled data, and leverage the differential semantics of linguistic quantifiers (e.g., ‘usually’ vs ‘always’) to drive model training. Experiments on three domains show that the learned classifiers outperform previous approaches for learning with limited data, and are comparable with fully supervised classifiers trained from a small number of labeled examples.
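The core idea above — that quantifier words carry graded probabilistic meaning that can constrain a classifier on unlabeled data — can be sketched as follows. This is a minimal illustration, not the paper's actual model: the quantifier-to-probability values and the `assertion_penalty` helper are hypothetical, standing in for the parsed assertions and constraint-driven objective the abstract describes.

```python
# Illustrative mapping from linguistic quantifiers to target probabilities.
# The specific values are assumptions for this sketch, not taken from the paper.
QUANTIFIER_PROB = {
    "always": 0.95,
    "usually": 0.70,
    "often": 0.60,
    "sometimes": 0.30,
    "rarely": 0.10,
    "never": 0.05,
}

def assertion_penalty(quantifier, predicted_probs, attribute_mask):
    """Penalty for one assertion like 'members of the class USUALLY have attribute A'.

    predicted_probs: P(label=1 | x) under the current model, per unlabeled example.
    attribute_mask:  1 if the example exhibits the attribute, else 0.
    Returns the squared gap between the attribute's expected frequency among
    predicted positives and the quantifier's target probability.
    """
    target = QUANTIFIER_PROB[quantifier]
    mass = sum(predicted_probs)
    if mass == 0:
        return 0.0
    observed = sum(p * a for p, a in zip(predicted_probs, attribute_mask)) / mass
    return (observed - target) ** 2

# A model whose positives all show the attribute matches 'always' closely,
# so the penalty is small; 'usually' would penalize the same predictions more.
penalty = assertion_penalty("always", [1.0, 1.0], [1, 1])
```

Summing such penalties over all parsed assertions, and minimizing them jointly with the classifier's parameters over unlabeled data, captures the flavor of constraint-based training without any labeled examples.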
