Grounding Compositional Hypothesis Generation in Specific Instances

A number of recent computational models treat concept learning as a form of probabilistic rule induction over a space of language-like, compositional concepts. Inference in such models typically requires repeatedly sampling from an (infinite) distribution over candidate concept rules and comparing their relative likelihoods in light of the current data or evidence. However, we argue that most existing algorithms for top-down sampling are inefficient and cognitively implausible accounts of human hypothesis generation. We therefore propose an alternative, the Instance Driven Generator (IDG), which constructs hypotheses bottom-up, directly from encountered positive instances of a concept. Using a novel rule-induction task based on the children’s game Zendo, we compare these “bottom-up” and “top-down” approaches to inference. We find that the bottom-up IDG model better accounts for human inferences and yields a computationally more tractable inference mechanism for concept-learning models based on a probabilistic language of thought.
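To make the contrast concrete, here is a minimal toy sketch of the two proposal strategies described above. All names, the feature set, and the single `exists`-style rule form are illustrative assumptions, not the paper's actual grammar or the IDG algorithm itself: a top-down proposer samples rules from a grammar without consulting the data, whereas a bottom-up proposer reads candidate rules directly off a positive instance, so every proposal is consistent with at least that instance.

```python
import random

# Toy "language of thought": a scene is a list of objects, each a dict of
# features; a rule is a predicate over scenes. (Illustrative only; the
# paper's concept language is far richer than this.)
FEATURES = {"colour": ["red", "green", "blue"], "size": [1, 2, 3]}

def sample_rule_top_down():
    """Top-down: sample a rule from the grammar, ignoring the data.
    Many such samples will not even cover the observed positive instances."""
    feature = random.choice(list(FEATURES))
    value = random.choice(FEATURES[feature])
    return ("exists", feature, value)  # "some object has feature == value"

def generate_rules_from_instance(scene):
    """Bottom-up (IDG-style, as sketched here): enumerate rules instantiated
    by a positive instance, so each proposal holds on that instance."""
    rules = []
    for obj in scene:
        for feature, value in obj.items():
            rules.append(("exists", feature, value))
    return rules

def holds(rule, scene):
    """Evaluate an "exists" rule against a scene."""
    _, feature, value = rule
    return any(obj.get(feature) == value for obj in scene)

# A positive instance of some unknown concept.
positive = [{"colour": "red", "size": 2}, {"colour": "blue", "size": 1}]

proposals = generate_rules_from_instance(positive)
# Every bottom-up proposal is guaranteed consistent with the grounding instance.
assert all(holds(r, positive) for r in proposals)
```

The point of the sketch is the guarantee in the final assertion: grounding generation in an instance filters out the many grammatically valid but data-inconsistent rules that blind top-down sampling would have to propose and reject.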
