Self-Organized Artificial Grammar Learning in Spiking Neural Networks

Renato Duarte (renato.duarte@bcf.uni-freiburg.de)
Bernstein Center Freiburg, Albert-Ludwigs University Freiburg im Breisgau, 79104 Germany & Institute for Adaptive and Neural Computation, School of Informatics, University of Edinburgh, EH8 9AB, United Kingdom

Peggy Seriès (pseries@inf.ed.ac.uk)
Institute for Adaptive and Neural Computation, School of Informatics, University of Edinburgh, EH8 9AB, United Kingdom

Abigail Morrison (morrison@fz-juelich.de)
Bernstein Center Freiburg, Albert-Ludwigs University Freiburg im Breisgau, 79104 Germany & Institute of Neuroscience and Medicine (INM-6), Computational and Systems Neuroscience, Jülich Research Center, 52425, Germany & Institute of Cognitive Neuroscience, Faculty of Psychology, Ruhr-University Bochum, 44801, Germany

Abstract

The Artificial Grammar Learning (AGL) paradigm provides a means to study the nature of syntactic processing and implicit sequence learning. With mere exposure and without performance feedback, human beings implicitly acquire knowledge about the structural regularities implemented by complex rule systems. We investigate to what extent a generic cortical microcircuit model can support formally explicit symbolic computations, instantiated by the same grammars used in the human AGL literature, and how a functional network emerges, in a self-organized manner, from exposure to this type of data. We use a concrete implementation of an input-driven recurrent network composed of noisy, spiking neurons, built according to the reservoir computing framework and dynamically shaped by a variety of synaptic and intrinsic plasticity mechanisms operating concomitantly. We show that, when shaped by plasticity, these models are capable of acquiring the structure of a simple grammar. When asked to judge string legality (in a manner similar to human subjects), the networks perform at a qualitatively comparable level.

Keywords: Sequence Learning; Self-Organization; Plasticity; Artificial Grammar Learning

Introduction

Sequential organization is a ubiquitous facet of adaptive cognition and behavior. Many of our most fundamental abilities reflect some form of adaptation to the structural regularities of sensory events as they unfold over time, and the extraction and use of such regularities.

In order to adequately navigate complex, dynamic environments, an agent ought to be able to represent and process sequences of information, use this information in a predictive context to make inferences about what will happen next, when it will happen and how to react to it, and assemble elementary responses into novel action sequences.

It is thus of central importance to elucidate how knowledge about sequential structure is acquired, represented in memory and expressed in behavior, and to understand the nature and characteristics of such knowledge representations and of the underlying acquisition mechanisms. Importantly, such a pursuit must be grounded in the biophysical properties of the neural processing infrastructure. Mapping complex computational processes onto the underlying neuronal processes, and assessing the properties of the neuronal system responsible for their implementation, is not straightforward, but it is likely to yield important insights into the nature of neural computation.

Artificial Grammar Learning

The problem of sequence learning has a long tradition in cognitive science and psycholinguistic research. Considerable effort has been devoted to the question of whether, and under which conditions, the acquisition of complex, rule-governed knowledge can be performed in an incidental or implicit manner, i.e., "without any requirements of awareness of either the process or the product of acquisition" (A. S. Reber, Walkenfeld, & Hernstadt, 1991). These studies exploit the fact that our ability to deal with complex sequential structure is most evident in language acquisition and processing, transforming the problem of sequence learning into the largely equivalent problem of grammar learning, which can be addressed within the domain of language syntax. In fact, growing evidence suggests that language acquisition and processing are mediated by implicit sequence learning and structured sequence processing (K. M. Petersson & Hagoort, 2010), and thus involve common mechanisms.

A typical AGL experiment consists of a learning or acquisition phase and a test phase. During acquisition, participants are exposed to a set of symbol sequences generated from a formal grammar (a complex rule system whose rules can be described by the allowed transitions of a directed graph, e.g. Figure 1), often in the form of a short-term memory task. During the subsequent test phase, subjects are informed about the existence of an underlying set of rules and instructed to classify a novel set of sequences as grammatical or not, based on their immediate, intuitive judgement.

A robust and well-replicated finding is that subjects perform significantly above chance, and performance improves if subjects are exposed to multiple sessions of implicit acquisition. This means that humans are able to acquire knowledge about the structural regularities of the underlying rule system through mere exposure.
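To make the acquisition and test phases concrete, the Python sketch below generates grammatical strings by a random walk over a directed transition graph and judges string legality by attempting to traverse the graph while consuming the string. Since Figure 1 is not reproduced here, the transition graph used is the classic finite-state grammar of Reber (1967) as a stand-in; the state labels, the uniform choice among outgoing edges, and the helper functions generate_string and is_grammatical are illustrative assumptions, not the implementation used in the study.

    import random

    # Illustrative stand-in for the grammar of Figure 1: the finite-state
    # grammar of Reber (1967). Each state maps to a list of
    # (emitted symbol, next state) transitions; END terminates a string.
    END = "END"
    GRAMMAR = {
        "q0": [("T", "q1"), ("P", "q2")],
        "q1": [("S", "q1"), ("X", "q3")],
        "q2": [("T", "q2"), ("V", "q4")],
        "q3": [("X", "q2"), ("S", END)],
        "q4": [("P", "q3"), ("V", END)],
    }

    def generate_string(rng=random):
        """Produce one grammatical string via a random walk over the
        graph, choosing uniformly among each state's outgoing edges."""
        state, symbols = "q0", []
        while state != END:
            symbol, state = rng.choice(GRAMMAR[state])
            symbols.append(symbol)
        return "".join(symbols)

    def is_grammatical(string):
        """Judge legality: track every state reachable while consuming
        the string; it is grammatical iff the walk can end in END."""
        states = {"q0"}
        for symbol in string:
            states = {nxt for s in states if s != END
                      for sym, nxt in GRAMMAR[s] if sym == symbol}
            if not states:
                return False
        return END in states

    # Acquisition set for the exposure phase, plus two test judgements:
    training_set = [generate_string() for _ in range(20)]
    print(is_grammatical("TSXS"))  # True: q0 -T-> q1 -S-> q1 -X-> q3 -S-> END
    print(is_grammatical("TXXV"))  # False: the walk stops in q4, not in END

In a human experiment, the test set mixes such grammatical strings with matched non-grammatical ones; in the model described in the abstract, the network's trained readout takes the place of is_grammatical, producing the legality judgement after exposure to the acquisition set.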
References

Begleiter, R., El-Yaniv, R., & Yona, G. (2004). On prediction using variable order Markov models. Journal of Artificial Intelligence Research.

Dimitrakakis, C. (2010). Bayesian variable order Markov models. In Proceedings of AISTATS.

Dimitrakakis, C., et al. (2013). Network self-organization explains the statistics and dynamics of synaptic connection strengths in cortex. PLoS Computational Biology.

Elman, J. L. (1991). Distributed representations, simple recurrent networks, and grammatical structure. Machine Learning.

Fernández, G., et al. (2006). Neural correlates of artificial syntactic structure classification. NeuroImage.

Hagoort, P., et al. (2012). The neurobiology of syntax: Beyond string sets. Philosophical Transactions of the Royal Society B: Biological Sciences.

Lukoševičius, M., & Jaeger, H. (2009). Reservoir computing approaches to recurrent neural network training. Computer Science Review.

Petersson, K. M., et al. (2005). Artificial grammar learning and neural networks.

Reber, A. S. (1967). Implicit learning of artificial grammars. Journal of Verbal Learning and Verbal Behavior.

Reber, A. S., Walkenfeld, F. F., & Hernstadt, R. (1991). Implicit and explicit learning: Individual differences and IQ. Journal of Experimental Psychology: Learning, Memory, and Cognition.

Sussillo, D., & Abbott, L. F. (2009). Generating coherent patterns of activity from chaotic neural networks. Neuron.