Discrete-time neural networks

Traditional feedforward neural networks are static structures that simply map input to output. To better reflect the dynamics of biological systems, time dependence is incorporated into the network by modeling the synaptic connections with Finite Impulse Response (FIR) linear filters, which capture the processes of axonal transport, synaptic modulation, and charge dissipation. While a constructive proof establishes the theoretical equivalence of the class of problems solvable by the FIR model and by the static structure, the FIR model offers certain practical and computational advantages. Adaptation of the network is achieved through an efficient gradient descent algorithm, which is shown to be a temporal generalization of the popular backpropagation algorithm for static networks. Applications of the network are discussed, with a detailed example of time series prediction.
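
To make the architecture concrete, the following is a minimal NumPy sketch of how one layer of an FIR network might compute its output at a single time step: each connection is a tapped-delay-line (FIR) filter applied to the recent history of its input, and the neuron sums the filtered signals before applying a nonlinearity. The function name `fir_layer_output`, the array shapes, and the tanh activation are illustrative assumptions, not the paper's notation.

```python
import numpy as np

def fir_layer_output(x_history, W, activation=np.tanh):
    """One forward step of a single FIR-network layer (illustrative sketch).

    x_history : array of shape (T+1, n_in)
        Current input x(k) followed by the T most recent past inputs.
    W : array of shape (n_out, n_in, T+1)
        Each connection is a length-(T+1) FIR filter (tap weights).
    Returns the layer output y(k) of shape (n_out,).
    """
    # Each synapse convolves its input's recent history with its tap weights;
    # the neuron then sums the filtered signals and applies the nonlinearity.
    s = np.einsum('oit,ti->o', W, x_history)
    return activation(s)

# Example: 3 inputs, 2 outputs, filter order T = 4 (arbitrary illustrative sizes)
rng = np.random.default_rng(0)
T, n_in, n_out = 4, 3, 2
W = rng.normal(scale=0.1, size=(n_out, n_in, T + 1))
x_history = rng.normal(size=(T + 1, n_in))   # x(k), x(k-1), ..., x(k-T)
print(fir_layer_output(x_history, W))
```

Setting the filter order T to zero reduces each connection to a single scalar weight, recovering the static feedforward network; the temporal backpropagation algorithm mentioned above adapts the full tap-weight vectors by propagating error terms both across layers and across the time indices of each filter.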
