Time Series Prediction Using Dynamic Ridge Polynomial Neural Networks

A novel higher-order polynomial neural network architecture is presented in this paper. The proposed network, called the Dynamic Ridge Polynomial Neural Network, combines the properties of higher-order and recurrent neural networks. The advantage of this type of network is that it exploits the properties of higher-order neural networks by functionally extending the input space into a higher-dimensional space, where linear separability is possible, without suffering from a combinatorial explosion in the number of weights. Furthermore, the network has a regular structure, since its order can be incrementally increased by adding further sigma units. Finally, the presence of the recurrent link gives the network the capacity for attractor dynamics and for storing information for later use. The performance of the network is tested on the prediction of nonlinear and nonstationary time series. Two popular time series, the Lorenz attractor and the mean value of the AE index, are used in our studies. The simulation results show an improved signal-to-noise ratio in comparison with a number of higher-order and feedforward networks.
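The architecture described above can be sketched as follows: each pi-sigma block of order k multiplies together the outputs of k sigma (linear) units, the ridge polynomial sums blocks of increasing order, and the recurrent link feeds the previous output back in as an extra input. This is a minimal illustrative sketch, not the paper's exact formulation; the class name, the sigmoid output nonlinearity, the weight initialization, and the way the feedback is appended to the input vector are all assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class DRPNN:
    """Illustrative Dynamic Ridge Polynomial NN (details are assumptions)."""

    def __init__(self, n_inputs, order, rng=None):
        rng = np.random.default_rng(0) if rng is None else rng
        # One extra input dimension carries the recurrent feedback y(t-1),
        # plus one more for a bias term.
        dim = n_inputs + 2
        # A pi-sigma block of order k uses k sigma (linear) units;
        # the ridge polynomial sums blocks of order 1..order.  Raising the
        # order just appends another weight matrix -- the "regular structure".
        self.W = [rng.normal(0.0, 0.1, size=(k, dim)) for k in range(1, order + 1)]
        self.y_prev = 0.0

    def step(self, x):
        # Augment the input with the previous output (recurrent link) and bias.
        z = np.concatenate([x, [self.y_prev, 1.0]])
        # Each pi-sigma block: product of its sigma-unit activations.
        total = sum(np.prod(Wk @ z) for Wk in self.W)
        y = sigmoid(total)
        self.y_prev = y  # stored for the next time step
        return y

net = DRPNN(n_inputs=3, order=2)
y1 = net.step(np.array([0.1, 0.2, 0.3]))
y2 = net.step(np.array([0.2, 0.3, 0.4]))  # depends on y1 via the feedback link
```

Note how the higher-order terms arise from products of linear (ridge) functions of the input rather than from explicit monomials of all input combinations, which is what keeps the weight count linear in the order instead of combinatorial.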
