Implicit Sequence Learning - A Case Study with a 4-2-4 Encoder Simple Recurrent Network

The temporal order inherent in a task is, without doubt, an important factor in human learning. Recurrent neural networks are a well-established tool for modeling implicit sequence learning. From the perspective of the psychology of learning, recurrent networks may therefore be suitable for building models that reproduce data obtained from experiments with human subjects. Such a model should not merely reproduce the data but also explain it and, further, make verifiable predictions. One basic requirement for this is an understanding of the processes taking place in the network during learning. In this paper, we investigate how implicitly learned temporal information is stored and represented in a simple recurrent network. To allow a detailed analysis of these effects, we use a small network and a standard encoding task.
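Since the study centers on a 4-2-4 encoder simple recurrent network, the following minimal sketch (not the authors' original implementation) illustrates one plausible setup: an Elman-style network with 4 input, 2 hidden, and 4 output units, trained to reproduce one-hot inputs presented in a fixed temporal order. The learning rate, number of epochs, input order, and the truncated training rule (no backpropagation through time) are assumptions made purely for illustration.

```python
# Minimal sketch of a 4-2-4 encoder simple recurrent network (Elman-style).
# Assumptions: one-hot inputs in a fixed repeating order, plain backprop
# with the context layer treated as a constant input at each step.

import numpy as np

rng = np.random.default_rng(0)

N_IN, N_HID, N_OUT = 4, 2, 4     # 4-2-4 encoder topology
LR = 0.5                         # assumed learning rate
EPOCHS = 5000                    # assumed training length

patterns = np.eye(N_IN)          # one-hot input patterns
order = [0, 1, 2, 3]             # assumed fixed presentation order

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hidden layer sees the current input plus the previous hidden state
# (context units), as in an Elman network.
W_ih = rng.normal(0, 0.5, (N_HID, N_IN))
W_ch = rng.normal(0, 0.5, (N_HID, N_HID))
b_h = np.zeros(N_HID)
W_ho = rng.normal(0, 0.5, (N_OUT, N_HID))
b_o = np.zeros(N_OUT)

for epoch in range(EPOCHS):
    context = np.zeros(N_HID)            # reset context at sequence start
    for idx in order:
        x = patterns[idx]
        target = x                       # encoder task: reproduce the input

        # Forward pass
        h = sigmoid(W_ih @ x + W_ch @ context + b_h)
        y = sigmoid(W_ho @ h + b_o)

        # Backward pass (squared error, truncated: no backprop through time)
        delta_o = (y - target) * y * (1 - y)
        delta_h = (W_ho.T @ delta_o) * h * (1 - h)

        W_ho -= LR * np.outer(delta_o, h)
        b_o -= LR * delta_o
        W_ih -= LR * np.outer(delta_h, x)
        W_ch -= LR * np.outer(delta_h, context)
        b_h -= LR * delta_h

        context = h                      # copy hidden state into context units

# Inspect the learned 2-dimensional hidden codes: with recurrence, they
# depend on the pattern and on its position within the sequence.
context = np.zeros(N_HID)
for idx in order:
    h = sigmoid(W_ih @ patterns[idx] + W_ch @ context + b_h)
    print(idx, np.round(h, 2))
    context = h
```

Because the hidden layer has only two units, its activations can be plotted directly, which is what makes such a small network convenient for studying how temporal information becomes embedded in the hidden representations.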