Learning as program induction

This workshop will cover new work that casts human learning as program induction — i.e., the learning of programs from data. The notion that the mind approximates rational (Bayesian) inference has had a strong influence on thinking in psychology since the 1950s. In constrained scenarios, typical of psychology experiments, people often behave in ways that approximate the dictates of probability theory. However, natural learning contexts are typically much more open-ended: there are often no clear limits on what is possible, and initial proposals often prove inadequate. This means that coming up with the right hypotheses and theories in the first place is often much harder than adjudicating among them. How do people, and how can machines, expand their hypothesis spaces to generate wholly new ideas, plans, and solutions? Recent work has begun to shed light on this problem via the idea that many aspects of learning can be better understood through the mathematics of program induction (Chater & Oaksford, 2013; Lake, Salakhutdinov, & Tenenbaum, 2015). People are demonstrably able to compose hypotheses from parts (Goodman, Tenenbaum, Feldman, & Griffiths, 2008; Piantadosi, Tenenbaum, & Goodman, 2016; Schulz, Tenenbaum, Duvenaud, Speekenbrink, & Gershman, 2017) and incrementally grow and adapt their models of the world (Bramley, Dayan, Griffiths, & Lagnado, 2017). A number of recent studies have formalized these abilities as program induction, using algorithms that mix stochastic recombination of primitives with memoization and compression to explain data (Dechter, Malmaud, Adams, & Tenenbaum, 2013; Ellis, Dechter, & Tenenbaum, 2015; Romano, Salles, Amalric, Dehaene, Sigman, & Figueira, 2017), ask informative questions (Rothe, Lake, & Gureckis, 2017), and support one- and few-shot inferences (Lake et al., 2015).
Program induction is also proving to be an important notion for understanding development and learning through play (Sim & Xu, 2017) and the formation of geometric understanding about the physical world (Amalric, Wang, Pica, Figueira, Sigman, & Dehaene, 2017).
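To make the core idea concrete, the following is a minimal illustrative sketch of program induction by stochastic recombination of primitives, in the spirit of the rule-based concept learning work cited above. The toy boolean concept-learning task, the primitive set, and all constants are invented for illustration; the size penalty is a crude stand-in for the compression component, and memoization/library learning is omitted for brevity.

```python
import random

# Primitives: boolean features of a stimulus, plus logical connectives.
FEATURES = ["red", "large", "striped"]
random.seed(0)

def sample_program(depth=0):
    """Stochastically grow an expression tree over the primitives."""
    if depth >= 2 or random.random() < 0.4:
        return ("feat", random.choice(FEATURES))
    op = random.choice(["and", "or", "not"])
    if op == "not":
        return ("not", sample_program(depth + 1))
    return (op, sample_program(depth + 1), sample_program(depth + 1))

def evaluate(prog, stim):
    """Interpret an expression tree on one stimulus."""
    tag = prog[0]
    if tag == "feat":
        return stim[prog[1]]
    if tag == "not":
        return not evaluate(prog[1], stim)
    a, b = evaluate(prog[1], stim), evaluate(prog[2], stim)
    return (a and b) if tag == "and" else (a or b)

def size(prog):
    """Description length as node count: a simplicity (compression-like) prior."""
    return 1 + sum(size(p) for p in prog[1:] if isinstance(p, tuple))

def score(prog, data):
    """Trade off fit to the labeled examples against program complexity."""
    fit = sum(evaluate(prog, stim) == label for stim, label in data)
    return fit - 0.1 * size(prog)

# Toy data: the hidden concept is "red and large" over all 8 stimuli.
data = [({"red": r, "large": l, "striped": s}, r and l)
        for r in (0, 1) for l in (0, 1) for s in (0, 1)]

# Stochastic search: sample many candidate programs, keep the best-scoring one.
best = max((sample_program() for _ in range(2000)), key=lambda p: score(p, data))
print(best, score(best, data))
```

Because the simplicity prior penalizes node count, the search prefers the compact program equivalent to "red and large" over larger expressions that fit the examples equally well.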

[1] Mariano Sigman, et al. The language of geometry: Fast comprehension of geometrical primitives and rules in human adults and preschoolers. PLoS Computational Biology, 2017.

[2] Thomas L. Griffiths, et al. Formalizing Neurath's Ship: Approximate Algorithms for Online Causal Learning. Psychological Review, 2016.

[3] Thomas L. Griffiths, et al. A Rational Analysis of Rule-Based Concept Learning. Cognitive Science, 2008.

[4] Fei Xu, et al. Learning Higher-Order Generalizations Through Free Play: Evidence From 2- and 3-Year-Old Children. Developmental Psychology, 2017.

[5] Samuel J. Gershman, et al. Compositional Inductive Biases in Function Learning. bioRxiv, 2016.

[6] Nick Chater, et al. Programs as Causal Models: Speculations on Mental Programs and Mental Representation. Cognitive Science, 2013.

[7] Noah D. Goodman, et al. The logical primitives of thought: Empirical foundations for compositional cognitive models. Psychological Review, 2016.

[8] Joshua B. Tenenbaum, et al. Dimensionality Reduction via Program Induction. AAAI Spring Symposia, 2015.

[9] Joshua B. Tenenbaum, et al. Bootstrap Learning via Modular Concept Discovery. IJCAI, 2013.

[10] Todd M. Gureckis, et al. Question Asking as Program Generation. NIPS, 2017.

[11] Joshua B. Tenenbaum, et al. Human-level concept learning through probabilistic program induction. Science, 2015.

[12] William M. Smith, et al. A Study of Thinking. 1956.

[13] S. Dehaene, et al. Bayesian selection of grammar productions for the language of thought. bioRxiv, 2017.