A Spiking Neuron Model of Serial-Order Recall

Feng-Xuan Choo (fchoo@uwaterloo.ca)
Chris Eliasmith (celiasmith@uwaterloo.ca)
Center for Theoretical Neuroscience, University of Waterloo
Waterloo, ON, Canada N2L 3G1

Abstract

Vector symbolic architectures (VSAs) have been used to model the human serial-order memory system for decades. Despite their success, however, none of these models has yet been shown to work in a spiking neuron network. As a first step, we present a proof-of-concept VSA-based model of serial-order memory implemented in a network of spiking neurons and demonstrate its ability to successfully encode and decode item sequences. The model also provides some insight into the differences between the cognitive processes of memory encoding and subsequent recall, and establishes a firm foundation on which more complex VSA-based models of memory can be developed.

Keywords: Serial-order memory; serial-order recall; vector symbolic architectures; holographic reduced representation; population coding; LIF neurons; neural engineering framework

Introduction

The human memory system is able to perform a multitude of tasks, one of which is remembering and recalling sequences of serially ordered items. In human serial-recall experiments, subjects are presented with items at a fixed interval, typically ranging from two items per second to one item every four seconds. After the entire sequence has been presented, the subjects are asked to recall the items presented to them, either in order (serial recall) or in any order the subject desires (free recall). Plotting the recall accuracy of the subjects against item position, experimenters often obtain a graph with a distinctive U-shape. This shape arises from what are known as the primacy and recency effects. The primacy effect refers to the increase in recall accuracy for items closer to the start of the sequence, and the recency effect refers to the same increase in recall accuracy for items closer to the end of the sequence.

Many models have been proposed to explain this peculiar behaviour in the recall accuracy data. Here we concentrate on one class of models, which employs vector symbolic architectures (VSAs) to perform serial memory and recall. Using VSAs to perform serial memory tasks would be insufficient, however, if the VSA-based model could not be implemented in spiking neurons, and thus could not be used to explain what the brain is actually doing. In this paper, we therefore present a proof-of-concept VSA-based model of serial recall implemented using spiking neurons.

Vector Symbolic Architecture

There are four core features of vector symbolic architectures. First, information is represented by randomly chosen vectors that are combined in a symbol-like manner. Second, a superposition operation (here denoted with a +) is used to combine vectors such that the result is another vector that is similar to the original input vectors. Third, a binding operation (⊗) is used to combine vectors such that the result is a vector that is dissimilar to the original vectors. Last, an approximate inverse operation (denoted with ∗, such that A∗ is the approximate inverse of A) is needed so that previously bound vectors can be unbound:

A ⊗ B ⊗ B∗ ≈ A

Just like addition and multiplication, the VSA operations are associative, commutative, and distributive.

The class of VSA used in this model is the Holographic Reduced Representation (HRR) (Plate, 2003). In this representation, each element of an HRR vector is chosen from a normal distribution with a mean of 0 and a variance of 1/n, where n is the number of elements in the vector. The standard addition operator is used to perform the superposition operation, and the circular convolution operation is used to perform the binding operation. The circular convolution of two vectors can be computed efficiently by utilizing the Fast Fourier Transform (FFT) algorithm:

x ⊗ y = F⁻¹(F(x) ⊙ F(y)),

where F and F⁻¹ are the FFT and inverse FFT operations respectively, and ⊙ is the element-wise multiplication of the two vectors. The circular convolution operation, unlike the standard convolution operation, does not change the dimensionality of the result vector. This makes the HRR extremely suitable for a neural implementation, because it means that the dimensionality of the network remains constant regardless of the number of operations performed.

The VSA-based Approach to Serial Memory

There are multiple ways in which VSAs can be used to encode serially ordered items into a memory trace. The CADAM model (Liepa, 1977) provides a simple example of how a sequence of items can be encoded as a single memory trace. In the CADAM model, the sequence containing the items A, B, and C would be encoded in a single memory trace, M_ABC, as follows:

M_A = A
M_AB = A + A ⊗ B
M_ABC = A + A ⊗ B + A ⊗ B ⊗ C

The model presented in this paper, however, takes inspiration from behavioural data obtained from macaque monkeys. This data suggests that each sequence item is encoded using
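The HRR operations described above can be sketched abstractly, outside a neural implementation. The following NumPy snippet (an illustration only; the model in this paper realizes these operations in spiking neurons) shows binding via FFT-based circular convolution and unbinding via the involution approximate inverse:

```python
import numpy as np

def cconv(x, y):
    # Circular convolution (binding): x (x) y = F^-1(F(x) . F(y))
    return np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(y)))

def inverse(x):
    # Approximate inverse (involution): x*[i] = x[(-i) mod n]
    return np.concatenate(([x[0]], x[:0:-1]))

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
n = 512
# HRR vectors: elements drawn from N(0, 1/n)
A = rng.normal(0, np.sqrt(1.0 / n), n)
B = rng.normal(0, np.sqrt(1.0 / n), n)

bound = cconv(A, B)                 # dissimilar to both A and B
unbound = cconv(bound, inverse(B))  # A (x) B (x) B* ~ A

print(cosine(bound, A))    # near zero: binding destroys similarity
print(cosine(unbound, A))  # well above chance: unbinding recovers A
```

Note that the recovered vector is only approximately A (a cosine similarity around 0.7 for large n is typical), which is why practical HRR systems follow unbinding with a cleanup memory.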
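The CADAM encoding, and its decoding by repeated unbinding, can likewise be sketched in NumPy. This is a toy illustration under the same assumptions as above; the cleanup step here is a simple nearest-neighbour lookup over the vocabulary, standing in for the neural cleanup memory a spiking implementation would use:

```python
import numpy as np

def cconv(x, y):
    # circular convolution (binding) via FFT
    return np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(y)))

def inverse(x):
    # involution: approximate inverse under circular convolution
    return np.concatenate(([x[0]], x[:0:-1]))

rng = np.random.default_rng(42)
n = 1024
vocab = {name: rng.normal(0, np.sqrt(1.0 / n), n) for name in "ABC"}
A, B, C = vocab["A"], vocab["B"], vocab["C"]

# Encode: M_ABC = A + A (x) B + A (x) B (x) C
M = A + cconv(A, B) + cconv(cconv(A, B), C)

def cleanup(v):
    # pick the vocabulary item most similar to the noisy probe v
    return max(vocab, key=lambda name: v @ vocab[name])

# Decode: unbind the prefix recovered so far, then clean up
decoded = []
prefix = None
for _ in range(3):
    probe = M if prefix is None else cconv(M, inverse(prefix))
    item = cleanup(probe)
    decoded.append(item)
    prefix = vocab[item] if prefix is None else cconv(prefix, vocab[item])

print(decoded)  # expected to recover ['A', 'B', 'C']
```

Each unbinding step adds noise, so the similarity of the correct item falls with position; with a small vocabulary and large n, the sequence is still recovered reliably.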
References

[1] E. K. Miller et al. The representation of multiple objects in prefrontal neuronal delay activity. Cerebral Cortex, 2007.
[2] A. Baddeley. Working Memory, Thought, and Action. 2007.
[3] T. C. Stewart et al. A biologically realistic cleanup memory: Autoassociation in spiking neurons. Cognitive Systems Research, 2011.
[4] D. Norris et al. Unchained memory: Error patterns rule out chaining models of immediate serial recall. The Quarterly Journal of Experimental Psychology, 49A(1), 80-115, 1996.
[5] A. P. Georgopoulos et al. Neuronal population coding of movement direction. Science, 1986.
[6] T. A. Plate. Holographic Reduced Representation: Distributed Representation for Cognitive Structures. 2003.
[7] C. Eliasmith et al. Neural Engineering: Computation, Representation, and Dynamics in Neurobiological Systems. IEEE Transactions on Neural Networks, 2004.
[8] S. Hochstein et al. Macaque monkeys categorize images by their ordinal number. Nature, 2000.