Finite impulse response neural networks with applications in time series prediction

Traditional feedforward neural networks are static structures that simply map input to output. Motivated by biological considerations, a dynamic network is proposed which uses Finite Impulse Response (FIR) linear filters to model the processes of axonal transport, synaptic modulation, and membrane charge dissipation. In effect, every weight in the static feedforward network is replaced by an adaptive FIR filter. A training algorithm based on gradient descent is derived for the FIR structure. The algorithm, termed temporal backpropagation, is shown to be a direct temporal and vectorial extension of the popular backpropagation algorithm. Various properties, including computational complexity and learning characteristics, are explored.

The FIR network can be viewed as an adaptive nonlinear filter whose applications encompass those of traditional adaptive filters and systems. In this dissertation, we concentrate on the use of the FIR network for nonlinear time series prediction. Various examples, including laboratory data, chaotic time series, and financial data, are studied. Iterated predictions and reconstruction of underlying chaotic attractors are used to demonstrate the capabilities of the network and methodology. The theoretical motivations for using networks in prediction are also addressed.

In looking for a more direct method of deriving temporal backpropagation, we introduce and prove a unifying principle called Network Reciprocity. The method, based on simple rules of block diagram manipulation, allows an almost effortless formulation of neural network algorithms. The approach is illustrated by deriving a variety of algorithms, including standard and temporal backpropagation, backpropagation-through-time for recurrent networks and control structures, an efficient method for training cascaded nonlinear filters, and algorithms for networks composed of Infinite Impulse Response (IIR) and lattice filters.
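To make the synaptic model concrete, consider a single connection in the network: where a static network forms the product of a scalar weight with the current input, the FIR network convolves a short input history with a tapped delay line. The following LaTeX sketch uses generic symbols (taps w_k, filter order T, input x, output y) that may differ from the notation adopted in the dissertation itself:

% Sketch of an FIR synapse: the scalar weight of a static connection
% is replaced by a tapped-delay-line (FIR) filter of order T.
% Symbols w_k, T, x, y are illustrative, not the dissertation's notation.
\begin{align}
  \text{static synapse:} \quad y(n) &= w \, x(n) \\
  \text{FIR synapse:}    \quad y(n) &= \sum_{k=0}^{T} w_k \, x(n-k)
\end{align}
% For T = 0 the FIR synapse reduces to an ordinary static weight,
% so the standard feedforward network is recovered as a special case.

Temporal backpropagation then adapts the filter taps w_k by gradient descent, in the same way that ordinary backpropagation adapts the scalar weights of a static network.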