Five Ways in Which Computational Modeling Can Help Advance Cognitive Science: Lessons From Artificial Grammar Learning

Abstract: There is a rich tradition of building computational models in cognitive science, but modeling, theoretical, and experimental research are not as tightly integrated as they could be. In this paper, we show that computational techniques—even simple ones that are straightforward to use—can greatly facilitate designing, implementing, and analyzing experiments, and generally help lift research to a new level. We focus on the domain of artificial grammar learning, and we give five concrete examples in this domain for (a) formalizing and clarifying theories, (b) generating stimuli, (c) visualization, (d) model selection, and (e) exploring the hypothesis space.
