Rapid trial-and-error learning with simulation supports flexible tool use and physical reasoning

Many animals, and an increasing number of artificial agents, display sophisticated capabilities to perceive and manipulate objects. But human beings remain distinctive in their capacity for flexible, creative tool use: using objects in new ways to act on the world, achieve a goal, or solve a problem. To study this type of general physical problem solving, we introduce the Virtual Tools game. In this game, people solve a large range of challenging physical puzzles in just a handful of attempts. We propose that the flexibility of human physical problem solving rests on the ability to imagine the effects of hypothesized actions, while the efficiency of human search arises from rich action priors which are updated via observations of the world. We instantiate these components in the "sample, simulate, update" (SSUP) model and show that it captures human performance across 30 levels of the Virtual Tools game. More broadly, this model provides a mechanism for explaining how people condense general physical knowledge into actionable, task-specific plans to achieve flexible and efficient physical problem solving.
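To make the "sample, simulate, update" idea concrete, here is a minimal illustrative sketch of such a loop on a toy one-dimensional action space. This is not the authors' implementation: the function names (`ssup_solve`, `simulate`, `act`), the Gaussian action prior, the width-halving update rule, and all parameter values are assumptions chosen for illustration.

```python
import random

def ssup_solve(simulate, act, lo=0.0, hi=1.0, n_candidates=20,
               threshold=0.95, max_attempts=10, seed=0):
    """Toy sample-simulate-update loop (illustrative sketch only).

    simulate(a) -> predicted reward for action a (the internal model)
    act(a)      -> reward obtained by actually trying action a
    """
    rng = random.Random(seed)
    # A simple Gaussian action prior over [lo, hi]
    mean, width = (lo + hi) / 2, (hi - lo) / 2
    for attempt in range(1, max_attempts + 1):
        # Sample: draw candidate actions from the current prior
        candidates = [min(hi, max(lo, rng.gauss(mean, width)))
                      for _ in range(n_candidates)]
        # Simulate: imagine each candidate's outcome with the internal model
        sim_reward, action = max((simulate(a), a) for a in candidates)
        if sim_reward >= threshold:
            # Act in the world only once simulation predicts likely success
            if act(action) >= threshold:
                return action, attempt
        # Update: shift the prior toward the most promising candidate
        mean, width = action, width * 0.5
    return None, max_attempts
```

A usage sketch: with a reward that peaks at some unknown target action, the loop typically finds a near-target action in a few attempts, because simulation screens out most candidates before any costly real attempt is made.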
