Theories or fragments?

Lake et al. argue persuasively that modelling human-like intelligence requires flexible, compositional representations that embody world knowledge. But human knowledge is too sparse and self-contradictory to be embedded in "intuitive theories." We argue, instead, that knowledge is grounded in exemplar-based learning, combined with highly flexible generalization, a viewpoint compatible both with non-parametric Bayesian modelling and with sub-symbolic methods such as neural networks.