Holographic stimulus representation and judgement of grammaticality in an exemplar model: Combining item and serial-order information

Randall K. Jamieson (randy_jamieson@umanitoba.ca)
Department of Psychology, University of Manitoba, Winnipeg, MB, R3T 2N2, Canada

D. J. K. Mewhort (mewhortd@queensu.ca)
Department of Psychology, Queen's University, Kingston, ON, K7L 3N6, Canada

Abstract

We examine representation assumptions for learning in the artificial-grammar task. Strings of letters can be represented by first building vectors to represent individual letters and then concatenating the letter vectors into a vector of larger dimensionality. Although such a representation works well in selected examples of artificial-grammar learning, it fails in examples that depend on left-to-right serial information. We show that recursive convolution solves the problem by combining the item and serial-order information in a stimulus into a single distributed data structure. We import the representations into an established model of human memory. The new scheme succeeds not only in applications that were successful using concatenation but also in applications that depend on left-to-right serial organization.

Keywords: Artificial grammar learning; Exemplar model; Holographic representation

Introduction

In an artificial-grammar learning (AGL) classification task, participants study strings of symbols. Following study, the participants are told that the studied items were constructed according to the rules of an artificial grammar and are invited to sort novel rule-based (grammatical) exemplars from novel rule-violating (ungrammatical) ones. Even though the participants are unable to describe the rules, they can discriminate the two classes of stimuli.

Initial accounts proposed that the participants abstracted the grammar and used that knowledge to judge the status of the exemplars (e.g., Reber, 1967, 1993). Later investigators argued that the participants judged grammaticality without reference to the grammar. To support the latter position, investigators identified several sources of information that discriminate the two classes of test strings. Brooks (1978) suggested that whole-item similarity between training and test strings is used to infer grammaticality. Perruchet and Pacteau (1990) argued that bigram overlap is used to infer grammaticality. Vokey and Brooks (1992) identified edit distance as a predictor, and Brooks and Vokey (1991) argued that patterns of repetition within a string are used to infer grammaticality. Knowlton and Squire (1996) identified associative chunk strength (ACS), and Johnstone and Shanks (1999) identified chunk novelty. Finally, Jamieson and Mewhort (2009a, 2010) showed that global similarity predicts performance in the task.

Regression analyses designed to sort the various predictors have confirmed a role for all of them (e.g., Johnstone & Shanks, 1999). Factorial designs that have pitted predictors against one another have been unable to identify a single dominant predictor (e.g., Kinder & Lotz, 2009; Vokey & Brooks, 1992). We think that many of the predictors (e.g., ACS, bigram overlap) point to a common underlying factor, namely left-to-right serial structure. If so, the problem is not to determine which predictor dominates but, rather, to decide how subjects encode material so that they have access to the left-to-right serial structure in the exemplars.
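To make the concatenation baseline concrete, the following minimal sketch (our illustration, not code from any of the cited papers; the alphabet, dimensionality, and function names are arbitrary assumptions) builds a random vector for each letter and represents a string by concatenating its letter vectors.

```python
import numpy as np

rng = np.random.default_rng(1)
DIM = 20  # dimensionality of each letter vector (arbitrary choice)

# One random vector per letter; the alphabet is illustrative.
letters = {ch: rng.normal(0.0, 1.0 / np.sqrt(DIM), DIM) for ch in "MTVRX"}

def encode_concat(s):
    """Represent a string by concatenating its letter vectors.

    The result has len(s) * DIM elements, so strings of different
    lengths occupy spaces of different dimensionality, and each
    letter's contribution is tied to its absolute position.
    """
    return np.concatenate([letters[ch] for ch in s])

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Two equal-length strings can be compared directly ...
print(cosine(encode_concat("MTV"), encode_concat("MTX")))
# ... but strings of unequal length cannot be compared at all.
```

The comments flag the weakness at issue: concatenation carries item and position information, but it gives no direct access to the left-to-right relations that span positions.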
In this paper, we explore an encoding mechanism that folds several orders of left-to-right serial structure in a string (i.e., single letters, bigrams, trigrams, and whole strings) into a coherent and distributed data structure. To begin, we describe the representation scheme. We then show that the new representations predict judgement of grammaticality when used in an established theory of retrieval (Jamieson & Mewhort, 2009a, 2010).

Holographic representation in memory

Many investigators have proposed that light holography provides a mathematical basis for memory representation (Borsellino & Poggio, 1973; Gabor, 1968; Khan, 1998; Longuet-Higgins, 1968; Poggio, 1973). Murdock's (1982, 1983, 1997) TODAM is probably the best-known use of the idea in experimental psychology. In TODAM, stimulus associations are formed using linear convolution, and associations are unpacked using correlation (deconvolution). More recently, Jones and Mewhort (2007) used recursive circular convolution (Plate, 1995) to develop a self-organizing model of semantic memory (BEAGLE). BEAGLE captures judgements of semantic typicality, categorization, priming, and syntax from word order.

BEAGLE's ability to handle so many phenomena of semantic memory is in itself impressive. From our perspective, however, BEAGLE's strength is that it shows how holographic representation can account for complex decision behaviour without adding control structures (e.g., learning and the application of rules). BEAGLE's success suggests that holographic stimulus representation should be explored in related models of learning and memory. The present work adapts BEAGLE's representation scheme to represent strings in the artificial-grammar classification task.
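As a concrete sketch of the encoding idea (our own minimal illustration, not the authors' implementation): circular convolution can be computed with the FFT (Plate, 1995), and, because plain circular convolution is commutative, order can be preserved by scrambling each operand with a fixed random permutation before convolving, as in BEAGLE's directional convolution (Jones & Mewhort, 2007). The dimensionality, alphabet, and all names below are our assumptions.

```python
import numpy as np

rng = np.random.default_rng(2024)
DIM = 1024  # every string maps into this one space (our choice)

# Random environment vector per letter; the alphabet is illustrative.
letters = {ch: rng.normal(0.0, 1.0 / np.sqrt(DIM), DIM) for ch in "MTVRX"}

# Fixed random permutations make binding noncommutative,
# so bind(x, y) != bind(y, x) and left-to-right order survives.
P_LEFT, P_RIGHT = rng.permutation(DIM), rng.permutation(DIM)

def cconv(x, y):
    """Circular convolution computed via the FFT (Plate, 1995)."""
    return np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(y)))

def bind(x, y):
    """Directional (order-preserving) binding of two vectors."""
    return cconv(x[P_LEFT], y[P_RIGHT])

def encode_string(s):
    """Fold every n-gram of s (single letters, bigrams, trigrams,
    ..., the whole string) into one fixed-width vector by binding
    each n-gram left to right and summing the results."""
    trace = np.zeros(DIM)
    for width in range(1, len(s) + 1):
        for start in range(len(s) - width + 1):
            gram = letters[s[start]]
            for ch in s[start + 1:start + width]:
                gram = bind(gram, letters[ch])
            trace += gram
    return trace

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Strings of any length are now directly comparable,
# and a string and its reversal get distinct representations.
print(cosine(encode_string("MTV"), encode_string("MTVRX")))
print(cosine(encode_string("MTV"), encode_string("VTM")))
```

Because every string, regardless of its length, maps into the same space, an exemplar model can compute the global similarity between a test string and each studied string as a single vector comparison.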

References

[1] James L. McClelland, et al. Finite state automata and simple recurrent networks. Neural Computation, 1989.
[2] H. C. Longuet-Higgins, et al. Holographic model of temporal recall. Nature, 1968.
[3] D. Shanks, et al. Two mechanisms in implicit artificial grammar learning? Comment on Meulemans and Van der Linden (1997). 1999.
[4] Douglas J. K. Mewhort, et al. Applying an exemplar model to the artificial-grammar task: String completion and performance on individual items. Quarterly Journal of Experimental Psychology, 2010.
[5] D. Shanks. Implicit Learning and Tacit Knowledge: An Essay on the Cognitive Unconscious, by A. Reber. 1995.
[6] Bennet B. Murdock, et al. A distributed memory model for serial-order information. 1983.
[7] Kevin N. Gurney, et al. An Introduction to Neural Networks. 2018.
[8] Philip A. Higham, et al. Opposition logic and neural network models in artificial grammar learning. Consciousness and Cognition, 2004.
[9] Douglas L. Hintzman, et al. Judgments of frequency and recognition memory in a multiple-trace memory model. 1988.
[10] Herbert A. Simon. The psychology of thinking: Embedding artifice in nature. The Sciences of the Artificial, 2019.
[11] Jeffrey L. Elman, et al. Finding structure in time. Cognitive Science, 1990.
[12] Patrick R. Hof, et al. Attempting to model dissociations of memory. 2002.
[13] John R. Vokey, et al. Abstract analogies and abstracted grammars: Comments on Reber (1989) and Mathews et al. (1989). 1991.
[14] John R. Vokey, et al. Salience of item knowledge in learning artificial grammars. 1992.
[15] T. Poggio, et al. Convolution and correlation algebras. Kybernetik, 1973.
[16] Javed I. Khan, et al. Characteristics of multidimensional holographic associative memory in retrieval with dynamically localizable attention. 1998.
[17] L. Squire, et al. Artificial grammar learning depends on implicit acquisition of both abstract and exemplar-specific information. Journal of Experimental Psychology: Learning, Memory, and Cognition, 1996.
[18] B. Murdock. Context and mediators in a theory of distributed associative memory (TODAM2). 1997.
[19] Axel Cleeremans, et al. Computational models of implicit learning. Implicit Learning, 2019.
[20] Randall K. Jamieson, et al. Applying an exemplar model to the serial reaction-time task: Anticipating from experience. Quarterly Journal of Experimental Psychology, 2009.
[21] B. Murdock. A theory for the storage and retrieval of item and associative information. 1982.
[22] Douglas L. Hintzman. "Schema abstraction" in a multiple-trace memory model. 1986.
[23] Michael N. Jones, et al. Representing word meaning and order information in a composite holographic lexicon. Psychological Review, 2007.
[24] Randall K. Jamieson, et al. Applying an exemplar model to the artificial-grammar task: Inferring grammaticality from similarity. Quarterly Journal of Experimental Psychology, 2009.
[25] Patrick van der Smagt, et al. Introduction to Neural Networks. 1995.
[26] A. Kinder, et al. The knowledge acquired during artificial grammar learning: Testing the predictions of two connectionist models. Psychological Research, 2000.
[27] A. Reber. Implicit learning of artificial grammars. 1967.
[28] Pierre Perruchet, et al. Synthetic grammar learning: Implicit rule abstraction or explicit fragmentary knowledge? Journal of Experimental Psychology: General, 1990.
[29] Randall K. Jamieson, et al. Global similarity predicts dissociation of classification and recognition: Evidence questioning the implicit-explicit learning distinction in amnesia. Journal of Experimental Psychology: Learning, Memory, and Cognition, 2010.
[30] Annette Kinder, et al. Connectionist models of artificial grammar learning: What type of knowledge is acquired? Psychological Research, 2009.
[31] Tony A. Plate, et al. Holographic reduced representations. IEEE Transactions on Neural Networks, 1995.
[32] T. Poggio, et al. On holographic models of memory. Kybernetik, 1973.
[33] D. Gabor, et al. Improved holographic model of temporal recall. Nature, 1968.