Inferring Structured Visual Concepts from Minimal Data

Humans can learn and reason about abstract concepts quickly, flexibly, and often from very little data. Here, we study how people learn novel concepts in a binary grid domain and find that even this minimal task requires inferring highly structured parts as well as the compositional relationships among them. Furthermore, by changing the presentation condition of the learning examples, we reveal distinct processes involved in learning such visual concepts: given the same images, human generalizations differ between rapid and static presentation conditions. We investigate this difference by developing several computational models that vary in their use of structured primitives and composition. We find that learning under rapid presentation is best described as inference in simple models, whereas learning under static presentation is best described as inference in a more structured space of graphics programs.
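
To illustrate the kind of modeling contrast at stake, the sketch below is a minimal, hypothetical example (not the paper's actual models): it applies the standard Bayesian "size principle" for concept generalization to a toy hypothesis space of binary-grid concepts built from one structured primitive, axis-aligned rectangles. The grid size, hypothesis space, prior, and likelihood are all assumptions made for this illustration.

```python
# Illustrative sketch only (not the authors' models): Bayesian concept
# generalization over binary grids with a size-principle likelihood.
# The "structured" hypothesis space of axis-aligned rectangles is an
# assumption chosen to show how structured primitives shape generalization.

from itertools import product

GRID = 4  # 4x4 binary grids; an arbitrary choice for this toy example


def rect_hypotheses():
    """All axis-aligned filled rectangles on the grid (structured primitives)."""
    hyps = []
    for r0, r1 in product(range(GRID), repeat=2):
        for c0, c1 in product(range(GRID), repeat=2):
            if r0 <= r1 and c0 <= c1:
                cells = frozenset((r, c)
                                  for r in range(r0, r1 + 1)
                                  for c in range(c0, c1 + 1))
                hyps.append(cells)
    return hyps


def posterior_predictive(examples, hypotheses):
    """Return P(query lies in the concept | examples) under a uniform prior."""
    # Size principle: each example has likelihood 1/|h| if its "on" cells
    # fall inside hypothesis h, and 0 otherwise.
    def score(h):
        if all(ex <= h for ex in examples):
            return (1.0 / len(h)) ** len(examples)
        return 0.0

    weights = [score(h) for h in hypotheses]
    Z = sum(weights)

    def predict(query):
        if Z == 0:
            return 0.0
        return sum(w for h, w in zip(hypotheses, weights) if query <= h) / Z

    return predict


if __name__ == "__main__":
    # Two example grids whose "on" cells lie in the top-left 2x2 block.
    examples = [frozenset({(0, 0), (0, 1)}), frozenset({(1, 0), (1, 1)})]
    predict = posterior_predictive(examples, rect_hypotheses())
    print(predict(frozenset({(0, 0), (1, 1)})))  # inside the block: high
    print(predict(frozenset({(3, 3)})))          # far corner: low
```

Swapping the rectangle hypotheses for, say, a flat space of literal pixel templates would yield much weaker generalization from the same two examples, which is the sort of contrast between simple and compositionally structured models that the abstract describes, here rendered only as a toy.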
