Sequential processing by overlap and fatigue of memories

Neural network (NN) models capture very well the parallel nature of human and animal behavior. However, the behavior of humans and animals is both highly parallel and highly sequential, and NN models should be able to represent both. In recent years this has been recognized by NN theorists, and several articles on NN models of sequential behavior have been published. These models share several problems: 1.) building up hierarchical "chunks" during sequence learning is impossible or clumsy; 2.) human-like flexibility in dealing with learned sequences, e.g. the ability to reproduce, even if with difficulty, the same sequence in reverse order, is in most cases absent; 3.) in most cases the models are applicable to only one type of sequential behavior, the learning and reproduction of temporal sequences of environmental events. However, other types of sequential behavior exist as well, such as complex decision processes, multiple-step inference processes (production-system-like behavior), and "free association".

The basic features of the model described here are the following: it is a fully interconnected network with symmetric weights, a continuously changing activation of the nodes between 0 and 1, a "fatigue" of the activability of the nodes as a function of time and current activation, a Hebbian-like learning rule, and strongly negative starting weights of the connections. With this structure the system can learn distributed memory patterns which act as state attractors (energy minima). However, because of the fatigue of the nodes, the network does not remain in the same energy minimum forever, as a Hopfield network does when its environmental inputs are held fixed; it leaves the minimum, allowing the system to jump into the next one. If the network can specify by itself (perhaps also using environmental information) into which state to jump next, then it can produce sequences. We have tried out one mechanism for this: the overlap (similarity) of memory patterns. If the memories overlap, the system jumps from one energy minimum to the next with the help of the overlaps.

We have applied this basic mechanism to produce 1.) rule-like inference processes, in which a memory has the function of a rule, and 2.) learning and understanding of temporal sequences of environmental events ("script" learning and application). The mechanism is applicable to the latter because of the following constraints assumed to hold in the world: 1.) events in the world are similar to each other, and as a consequence the memories of events overlap with each other; 2.) within short time periods the world does not change abruptly but smoothly, so that temporally near events are more similar than temporally distant ones. As a consequence, memories of temporally near events have more overlap with each other than memories of temporally distant events.
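A minimal sketch of this kind of network is given below in Python. The specific update equations and all parameter values (initial weight, learning rate, threshold, fatigue time constants) are illustrative assumptions chosen to make the sketch self-contained, not the exact equations of the model:

    import numpy as np

    N = 100            # number of nodes (illustrative)
    W_INIT = -0.1      # strongly negative starting weight (assumed value)
    ETA = 0.5          # Hebbian learning rate (assumed value)
    THETA = 2.0        # firing threshold (assumed value)
    TAU_F = 20.0       # fatigue build-up time constant (assumed value)
    TAU_R = 100.0      # fatigue recovery time constant (assumed value)

    # Fully interconnected network; symmetric, strongly negative initial weights.
    W = np.full((N, N), W_INIT)
    np.fill_diagonal(W, 0.0)

    def hebbian_learn(W, pattern, eta=ETA):
        # Hebbian-like rule: strengthen connections between co-active nodes.
        # The outer product is symmetric, so W stays symmetric.
        W = W + eta * np.outer(pattern, pattern)
        np.fill_diagonal(W, 0.0)
        return W

    def step(a, f, W, dt=0.1):
        # Continuous activation change between 0 and 1.
        net = W @ a
        target = 1.0 / (1.0 + np.exp(-(net - THETA)))  # squashed net input
        target = target * (1.0 - f)    # fatigue reduces the attainable activation
        a = a + dt * (target - a)
        # Fatigue grows with current activation and recovers when the node is quiet,
        # so the network is eventually pushed out of any energy minimum it settles into.
        f = np.clip(f + dt * (a / TAU_F - f / TAU_R), 0.0, 1.0)
        return a, f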
The system has shown some interesting "emergent properties" during the simulations, for example: 1.) a process analogous to "automatic backtracking" during rule-based inference; 2.) learning of hierarchical sequence chunks during script learning, whereby a learned sequence or an individual subsequence could be activated and run in the correct order by activating the nodes that had assumed the role of chunks; 3.) a preference for reproducing a temporal sequence in the order in which it was learned, paired with the possibility of reproducing it, after some prompting, in the opposite order.
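The toy continuation below makes the overlap constraint concrete. It builds a "script" of five event memories in which temporally adjacent events share active nodes, stores them with the Hebbian rule, and then prompts the first event; fatigue then pushes the state out of each attractor toward the next overlapping one. Whether the readout actually walks the whole chain depends on the parameter values assumed above; the construction is an illustration, not the original simulation:

    def make_script(n_events=5, active=20, overlap=8):
        # Memories of temporally adjacent events share `overlap` active nodes,
        # so near events overlap more than distant ones.
        patterns = []
        start = 0
        for _ in range(n_events):
            p = np.zeros(N)
            p[start:start + active] = 1.0
            patterns.append(p)
            start += active - overlap
        return patterns

    patterns = make_script()
    for p in patterns:
        W = hebbian_learn(W, p)

    a = 0.9 * patterns[0]          # prompt the first event of the script
    f = np.zeros(N)
    for t in range(3000):
        a, f = step(a, f, W)
        if t % 500 == 0:
            # Report which stored memory the current state resembles most.
            print(t, int(np.argmax([a @ p for p in patterns])))

Because the weights are symmetric, the same overlaps support transitions in both directions; prompting the final event instead of the first can in principle drive the chain in reverse, in line with point 3 above.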