Training Synaptic Delays in a Recurrent Neural Network

Algorithms for training recurrent neural networks (RNNs), with or without time delays, are notorious for their heavy demands on computing resources. These algorithms adapt a set of network parameters so that the network performs tasks typical of its architecture. Allowing the synaptic delays to be modified as well enlarges the parameter space of the network, making the training task even more demanding and the network more sensitive to the choice of parameters. The main incentives for adding this capability are rooted in our knowledge of biological networks as well as in practical considerations, ranging from improved generalization to enhanced network capabilities. In this work we show how the addition of time-delay parameters increases both network capacity and capability, enabling smaller yet more competent networks to be used for the same tasks. The work also introduces several methods for training recurrent neural networks in which both the synaptic time delays and the weights are optimized. Several training methods are evaluated; the most efficient one, based on adaptive simulated annealing, enables training of recurrent neural networks with synaptic time delays even on a low-end (personal computer) platform. The performance of the various training methods and the capabilities of recurrent neural networks with time delays as a whole are examined via extensive computer simulations on typical benchmark tasks. In addition, special new configurations of recurrent neural networks with time delays, modifications of the general configuration, are presented and examined together with special training algorithms.
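
To make the idea of jointly optimizing weights and synaptic delays concrete, the sketch below anneals both parameter sets of a small discrete-time delayed RNN. It is a minimal illustration, not the training algorithm developed in this work: it uses plain Metropolis-style simulated annealing with a geometric cooling schedule rather than adaptive simulated annealing, and the network size, delay range, update rule, and toy task (reproducing the AND of the input with a delayed copy of itself) are all illustrative assumptions.

```python
# Minimal sketch (assumed setup): jointly annealing weights W and integer
# synaptic delays D of a small delayed RNN on a toy temporal task.
import numpy as np

rng = np.random.default_rng(0)

N, MAX_DELAY, T = 4, 3, 40                        # neurons, max delay (steps), sequence length
x = rng.integers(0, 2, size=T).astype(float)      # binary input stream
target = np.roll(x, 2) * x                        # toy target: AND of input with itself 2 steps back

def run(W, D, b):
    """Simulate the delayed RNN: y_i(t) = tanh(b_i + sum_j W_ij * y_j(t - D_ij) + input)."""
    y = np.zeros((T + MAX_DELAY, N))              # padded history so negative lags are zero
    for t in range(T):
        tt = t + MAX_DELAY
        pre = b.copy()
        pre[0] += x[t]                            # inject the input into neuron 0
        for i in range(N):
            for j in range(N):
                pre[i] += W[i, j] * y[tt - D[i, j], j]
        y[tt] = np.tanh(pre)
    return y[MAX_DELAY:, -1]                      # read the output from the last neuron

def cost(W, D, b):
    return np.mean((run(W, D, b) - target) ** 2)

# Initial state: random weights, unit delays, zero biases.
W = rng.normal(0, 0.5, (N, N))
D = np.ones((N, N), dtype=int)
b = np.zeros(N)
cur = best = cost(W, D, b)
temp = 1.0
for step in range(5000):
    # Propose a move: usually perturb one weight, occasionally re-draw one delay.
    Wn, Dn = W.copy(), D.copy()
    i, j = rng.integers(N), rng.integers(N)
    if rng.random() < 0.2:
        Dn[i, j] = rng.integers(1, MAX_DELAY + 1)
    else:
        Wn[i, j] += rng.normal(0, 0.3 * temp)
    new = cost(Wn, Dn, b)
    # Metropolis acceptance with a simple geometric cooling schedule.
    if new < cur or rng.random() < np.exp((cur - new) / max(temp, 1e-8)):
        W, D, cur = Wn, Dn, new
        best = min(best, cur)
    temp *= 0.999
print("best cost:", best)
```

Because the delays are discrete, gradient-free proposals of this kind sidestep the need to differentiate through the delay parameters; an adaptive annealing schedule, as used in this work, mainly changes how the proposal widths and temperature are controlled.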
