Recovering Quantitative Models of Human Information Processing with Differentiable Architecture Search

The integration of behavioral phenomena into mechanistic models of cognitive function is a fundamental staple of cognitive science. Yet researchers are accumulating increasing amounts of data without the time or funding to integrate these data into scientific theories. We seek to overcome these limitations by incorporating existing machine learning techniques into an open-source pipeline for the automated construction of quantitative models. This pipeline leverages neural architecture search to automate the discovery of interpretable model architectures, and automatic differentiation to automate the fitting of model parameters to data. We evaluate the utility of these methods based on their ability to recover quantitative models of human information processing from synthetic data. We find that these methods are capable of recovering basic quantitative motifs from models of psychophysics, learning, and decision making. We also highlight weaknesses of this framework and discuss future directions for mitigating them.
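
To make the approach concrete, here is a minimal sketch in PyTorch of the two ingredients described above. It does not reproduce the paper's pipeline; the candidate operations, hyperparameters, and synthetic data are illustrative assumptions. A DARTS-style "mixed" operation relaxes the discrete choice among candidate computations into a softmax-weighted sum, so that architecture weights and model parameters can both be fit to data by gradient descent via automatic differentiation:

```python
# Sketch only: a DARTS-style mixed operation over a hypothetical search
# space for a one-dimensional psychophysical mapping. Both the candidate
# primitives and the training setup are assumptions for illustration.
import torch
import torch.nn as nn

torch.manual_seed(0)

class MixedOp(nn.Module):
    """Softmax-weighted mixture over candidate primitive operations."""
    def __init__(self):
        super().__init__()
        self.ops = nn.ModuleList([
            nn.Linear(1, 1),                                # linear term
            nn.Sequential(nn.Linear(1, 1), nn.Sigmoid()),   # saturating term
        ])
        # Architecture weights (alpha): one logit per candidate operation.
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))

    def forward(self, x):
        w = torch.softmax(self.alpha, dim=0)
        return sum(wi * op(x) for wi, op in zip(w, self.ops))

model = MixedOp()

# Separate optimizers for model parameters and architecture weights.
# Proper DARTS updates alpha on held-out data; here both updates share
# the same synthetic data as a first-order simplification.
w_opt = torch.optim.Adam(
    [p for n, p in model.named_parameters() if n != "alpha"], lr=1e-2)
a_opt = torch.optim.Adam([model.alpha], lr=1e-2)

x = torch.rand(256, 1)                         # synthetic stimulus intensities
y = 1.0 / (1.0 + torch.exp(-8 * (x - 0.5)))    # synthetic psychometric responses

for step in range(2000):
    for opt in (w_opt, a_opt):                 # alternate the two updates
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(x), y)
        loss.backward()
        opt.step()

# The learned mixture weights; the argmax yields a discrete operation choice.
print(torch.softmax(model.alpha, dim=0))
```

After the search converges, taking the argmax over the architecture weights recovers a discrete choice of computation, which is what lets the resulting architecture be read as an interpretable quantitative model rather than a black box.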
