Connectionist networks learn to transmit chaos

Evidence presented in the preceding paper indicates that the activity of some neurons during the generation of coordinated motor patterns may be attributable to chaos. Because even "simple" biological systems are difficult to control, we used connectionist networks to ask whether a chaotic signal originating in one part of the nervous system can be learned and transmitted by another. We examined a number of different architectures and report here the findings for a simple network consisting of one input unit, four hidden units, and one output unit. During training, the input unit was given analog values generated by the logistic equation with a parameter of either 3.60 or 3.95, or by one variable of the three-variable Rössler attractor. The error backpropagated by the learning algorithm was a function of the difference between the input value and the output at each iteration. Iterations involving small changes in analog value produced good similarity between the input and output signals, but little learning occurred because only a small error was propagated back to the synapses. With larger differences in analog value (and larger feedback error) at each iteration, the networks learned to transmit different chaotic attractors. Once a network had learned one input, it could transmit another without any change in its synapses. Increasing the number of hidden units increased the rate of learning.
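
The following Python sketch illustrates the kind of setup described above: a 1-4-1 network trained online by backpropagation, where the error at each iteration is the difference between the input value (a logistic-map sample) and the network's output. It is a minimal illustration only; the unit type (sigmoid), learning rate, weight initialization, and number of training passes are assumptions, since the abstract does not specify them.

```python
import numpy as np

rng = np.random.default_rng(0)

def logistic_series(r, n, x0=0.4):
    """Iterate the logistic map x_{t+1} = r * x_t * (1 - x_t)."""
    xs = np.empty(n)
    x = x0
    for t in range(n):
        x = r * x * (1.0 - x)
        xs[t] = x
    return xs

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One input unit, four hidden units, one output unit (small random weights).
W1 = rng.normal(scale=0.5, size=(4, 1)); b1 = np.zeros(4)
W2 = rng.normal(scale=0.5, size=(1, 4)); b2 = np.zeros(1)

def forward(x):
    h = sigmoid(W1 @ np.array([x]) + b1)   # hidden activations
    y = sigmoid(W2 @ h + b2)               # output activation
    return h, y[0]

def train(series, lr=0.5, epochs=200):
    """Online backpropagation; the error at each iteration is the
    difference between the input value and the network's output."""
    for _ in range(epochs):
        for x in series:
            h, y = forward(x)
            err = x - y                     # input value minus output
            # Output-layer delta (sigmoid derivative is y * (1 - y)).
            delta_out = err * y * (1.0 - y)
            # Hidden-layer deltas propagated back through W2.
            delta_hid = (W2[0] * delta_out) * h * (1.0 - h)
            # Weight updates in the direction that reduces the squared error.
            W2 += lr * delta_out * h[np.newaxis, :]
            b2 += lr * delta_out
            W1 += lr * np.outer(delta_hid, [x])
            b1 += lr * delta_hid

# Train on the 3.95 logistic map, then test transmission of the 3.60 map
# without further weight changes (the "new attractor" test in the abstract).
train(logistic_series(3.95, 500))
test = logistic_series(3.60, 200)
out = np.array([forward(x)[1] for x in test])
print("mean |input - output| on unseen attractor:", np.abs(test - out).mean())
```

The final two lines mimic the abstract's observation that a network trained on one chaotic input can transmit another without changing its synapses, by evaluating the trained weights on a logistic series with a different parameter.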