Artificial grammar learning and neural networks

Karl Magnus Petersson (karl.magnus.petersson@fcdonders.ru.nl)
F.C. Donders Centre for Cognitive Neuroimaging, Radboud University Nijmegen, The Netherlands
CSI, Center for Intelligent Systems, Universidade do Algarve, Portugal

Peter Grenholm (peter_grenholm@yahoo.se)
Cognitive Neurophysiology Research Group, Karolinska Institutet, 171 76 Stockholm, Sweden

Christian Forkstam (christian.forkstam@cns.ki.se)
Cognitive Neurophysiology Research Group, Karolinska Institutet, 171 76 Stockholm, Sweden

Abstract

Recent FMRI studies indicate that language-related brain regions are engaged in artificial grammar (AG) processing. In the present study we investigate the Reber grammar by means of formal analysis and network simulations. We outline a new method for describing the network dynamics and propose an approach to grammar extraction based on the state-space dynamics of the network. We conclude that statistical frequency-based and rule-based acquisition procedures can be viewed as complementary perspectives on grammar learning and, more generally, that classical cognitive models can be viewed as a special case of a dynamical systems perspective on information processing.

Keywords: artificial grammar learning, neural network, dynamical systems.

Introduction

According to Chomsky, a core feature of natural language processing is the 'infinite use of finite means'. The family of right-linear phrase structure grammars, implementable in the finite-state architecture (FSA), is a simple formal model of this idea. The work of Reber (1967) suggested that humans can learn AGs implicitly and that the relevant structure is abstracted from the input. Reber (1967) proposed that this process is intrinsic to natural language learning, and it has been suggested that AG learning (AGL) is a relevant model for aspects of language acquisition (Gomez & Gerken, 2000). Recent FMRI studies indicate that language-related brain regions are engaged in AG processing (Petersson et al., 2004). Here we investigate the Reber grammar (Figure 1) by means of formal analysis and network simulations and outline an approach to grammar extraction based on the network state-space dynamics.

Elementary formal analysis

We begin by showing that the Reber grammar, and in certain respects similar AGs, can be learned by acquiring a finite set of n-grams (for details see Grenholm, 2003). To this end, we modify the Reber machine (RM0) by connecting the final state to the initial state with a #-labeled transition (Figure 1). The modified machine, RM1, outputs an infinite symbol string: first an end-of-string symbol '#', then a Reber string, a '#', a new string, and so on indefinitely. It turns out that a necessary and sufficient condition for recognizing all possible output strings of RM1 is knowledge of a set, TG, of 48 trigrams: a string is generated by RM1 if and only if it starts with '#' and only contains trigrams in TG.

Figure 1: The transition graph representation of RM1 corresponding to the Reber grammar (transitions labeled with the symbols M, T, V, X, R and #).
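The construction is easy to simulate. The sketch below is a minimal Python illustration of our own, not the authors' implementation: the transition table is a hypothetical stand-in for the labeled graph of Figure 1 (which is not reproduced here), chosen only to use the same symbol set and to show how the added #-transition turns the Reber machine into a generator of an unbounded, '#'-delimited symbol stream.

```python
import random

# Hypothetical transition table standing in for the graph in Figure 1;
# the actual Reber-machine topology is not reproduced here.  State 0 is
# the initial state, and the '#'-labeled transition from the final
# state (5) back to state 0 is the modification that turns RM0 into RM1.
RM1 = {
    0: {'M': 1, 'V': 2},
    1: {'T': 1, 'X': 3},
    2: {'V': 2, 'X': 4},
    3: {'X': 2, 'R': 5},
    4: {'T': 3, 'R': 5},
    5: {'#': 0},
}

def rm1_stream(n_symbols, rng=None):
    """Emit the first n_symbols of one (in principle infinite) run of RM1.

    Starting in the final state makes the output begin with '#', followed
    by a Reber-like string, another '#', a new string, and so on.
    """
    rng = rng or random.Random(0)
    state, out = 5, []
    for _ in range(n_symbols):
        symbol, state = rng.choice(sorted(RM1[state].items()))
        out.append(symbol)
    return ''.join(out)

if __name__ == '__main__':
    print(rm1_stream(40))  # a '#'-delimited stream of strings from the toy machine
```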
To show this claim, we observe that the Reber grammar yields exactly the same strings as RM2 (Figure 2). Assume that we know RM1 and that we know the latest two symbols in an output string from RM2; this determines the internal state of RM1 and RM2. The set of possible bigrams is {MT, MV, …, #V}. We obtain the set of possible trigrams from RM2 by taking these 21 bigrams and extending them according to Figure 2, yielding TG = {MTT, MTV, …, #VX}. It is then clear that any output string of RM1 starts with '#' and contains only trigrams in TG. Conversely, assume that a string begins with one of the 48 trigrams whose first symbol is '#'. The only possibilities are '#MT', '#MV', and '#VX', each of which determines the internal state of RM2 after the first three symbols; proceeding symbol by symbol, every subsequent three-symbol window in TG corresponds to a legal transition from the current state, so by induction the entire string can be generated by RM1.
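Continuing the previous sketch, the trigram criterion can be checked mechanically. The code below is again our own illustration, not the authors' method: it enumerates all symbol trigrams along length-3 paths of the hypothetical RM1 table (rather than reading them off RM2, whose construction in Figure 2 is not reproduced here) and then tests the membership condition. Since the table is only illustrative, the resulting set will not contain exactly the 21 bigrams and 48 trigrams of the actual grammar.

```python
# Continues the previous sketch: assumes the RM1 table and rm1_stream() above.

def trigrams_of(machine):
    """All symbol trigrams readable along length-3 paths of the machine.

    The '#'-transition makes the transition graph strongly connected, so
    every such path occurs somewhere in an infinite run; these are exactly
    the trigrams that can appear in the machine's output stream.
    """
    tg = set()
    for s0 in machine:
        for a, s1 in machine[s0].items():
            for b, s2 in machine[s1].items():
                for c in machine[s2]:
                    tg.add(a + b + c)
    return tg

def accepted(string, tg):
    """The n-gram criterion from the text: the string starts with '#'
    and every three-symbol window belongs to TG."""
    return string.startswith('#') and all(
        string[i:i + 3] in tg for i in range(len(string) - 2))

if __name__ == '__main__':
    TG = trigrams_of(RM1)
    sample = rm1_stream(60)
    print(len(TG))                          # size of TG for the toy machine
    print(accepted(sample, TG))             # True: a genuine RM1 output prefix
    print(accepted(sample[:-1] + 'Q', TG))  # False: corrupted final symbol
```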

[1] W. S. McCulloch, et al. A logical calculus of the ideas immanent in nervous activity, 1990, The Philosophy of Artificial Intelligence.

[2] Elaine J. Weyuker, et al. Computability, complexity, and languages: fundamentals of theoretical computer science, 2014, Computer science and applied mathematics.

[3] A. Reber. Implicit learning of artificial grammars, 1967.

[4] Fenna H. Poletiek, et al. Implicit learning of artificial grammar, 2002.

[5] Karl Magnus Petersson, et al. Learning and memory in the human brain, 2005.

[6] Cristopher Moore, et al. Generalized shifts: unpredictability and undecidability in dynamical systems, 1991.

[7] Karl Magnus Petersson. The human brain, language, and implicit learning, 2004.

[8] Elaine J. Weyuker, et al. Computability, complexity, and languages, 1983.

[9] S. Haykin. Neural Networks: A Comprehensive Foundation, 1994.

[10] Aravind K. Joshi, et al. Tree-Adjoining Grammars, 1997, Handbook of Formal Languages.

[11] 정혜선. Implicit Learning with Artificial Grammar, 2003.

[12] E. Edwards. Communication theory, 1967, Ergonomics.

[13] D. C. Cooper, et al. Theory of Recursive Functions and Effective Computability, 1969, The Mathematical Gazette.

[14] J. McCauley. Chaos, dynamics, and fractals: an algorithmic approach to deterministic chaos, 1993.

[15] Karl Magnus Petersson, et al. Artificial syntactic violations activate Broca's region, 2004, Cognitive Science.

[16] Wolfgang Maass, et al. Lower Bounds for the Computational Power of Networks of Spiking Neurons, 1996, Neural Computation.

[17] Karl Magnus Petersson, et al. On the relevance of the neurobiological analogue of the finite-state architecture, 2005, Neurocomputing.

[18] R. Badii, et al. Complexity: Hierarchical Structures and Scaling in Physics, 1997.

[19] Jeffrey L. Elman, et al. Finding Structure in Time, 1990, Cognitive Science.

[20] M. Mackey, et al. Chaos, Fractals, and Noise: Stochastic Aspects of Dynamics, 1998.

[21] Denise Brandão de Oliveira e Britto, et al. The faculty of language, 2007.

[22] Noam Chomsky. Three Factors in Language Design, 2005, Linguistic Inquiry.

[23] H. Siegelmann, et al. Analog computation with dynamical systems, 1998.

[24] W. Maass, et al. What makes a dynamical system computationally powerful?

[25] R. Gómez, et al. Infant artificial language learning and language acquisition, 2000, Trends in Cognitive Sciences.