A comparison of feed-forward and recurrent neural networks in time series forecasting

The forecasting performance of feed-forward and recurrent neural networks (NNs) trained with different learning algorithms is analyzed and compared using the Mackey-Glass nonlinear chaotic time series, a well-known benchmark whose values are hard to predict. The multi-layer perceptron NN was chosen as the feed-forward network because it is still the most commonly used network in financial forecasting models. It is compared with a modified version of the so-called dynamic multi-layer perceptron NN, characterized by a dynamic neuron model, i.e., an auto-regressive moving-average (ARMA) filter built into the hidden-layer neurons. Every hidden-layer neuron can therefore process previous values of its own activity together with new input signals. The obtained results indicate satisfactory forecasting performance for both networks; however, the recurrent NN was more accurate in practically all tests while using fewer hidden-layer neurons than the feed-forward NN. This study once again confirms the effectiveness and potential of dynamic neural networks in modeling and predicting highly nonlinear processes, and their application in the design of financial forecasting models is therefore highly recommended.
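The two ingredients described above can be sketched in a few lines of Python. This is an illustrative sketch only, not the paper's actual implementation: the Mackey-Glass parameter values (β = 0.2, γ = 0.1, n = 10, τ = 17), the Euler discretization step, and all names and filter orders in the hypothetical `DynamicNeuron` class (`na` AR terms, `nb` MA terms) are assumptions chosen to match common benchmark conventions.

```python
import numpy as np

def mackey_glass(n_samples=1000, tau=17, beta=0.2, gamma=0.1,
                 n=10, dt=1.0, x0=1.2, discard=500):
    """Generate the Mackey-Glass series by Euler integration of
    dx/dt = beta * x(t - tau) / (1 + x(t - tau)**n) - gamma * x(t).
    Parameter values are the common benchmark choices (assumed here)."""
    delay = int(tau / dt)
    total = n_samples + discard
    x = np.empty(total + delay)
    x[:delay + 1] = x0  # constant history before t = 0
    for t in range(delay, total + delay - 1):
        xd = x[t - delay]  # delayed state x(t - tau)
        x[t + 1] = x[t] + dt * (beta * xd / (1.0 + xd ** n) - gamma * x[t])
    return x[delay + discard:]  # drop the transient, keep n_samples points

class DynamicNeuron:
    """Hypothetical sketch of a dynamic neuron: an ARMA filter between the
    weighted input sum and the activation, so the neuron processes previous
    values of its own activity together with new input signals."""
    def __init__(self, n_inputs, na=2, nb=2, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.normal(scale=0.5, size=n_inputs)  # input weights
        self.a = rng.normal(scale=0.1, size=na)        # AR coefficients
        self.b = rng.normal(scale=0.5, size=nb)        # MA coefficients
        self.v_hist = np.zeros(nb)  # past weighted input sums v(k-j)
        self.y_hist = np.zeros(na)  # past neuron outputs y(k-i)

    def step(self, x):
        v = self.w @ x                                  # new input signal
        self.v_hist = np.roll(self.v_hist, 1)
        self.v_hist[0] = v
        u = self.b @ self.v_hist - self.a @ self.y_hist  # ARMA filter output
        y = np.tanh(u)                                   # activation
        self.y_hist = np.roll(self.y_hist, 1)
        self.y_hist[0] = y
        return y

series = mackey_glass()
neuron = DynamicNeuron(n_inputs=4)
y = neuron.step(series[:4])
```

Feeding the neuron a sliding window of past series values is one simple way to set up the one-step-ahead forecasting task; in a full network, a layer of such neurons would be trained by gradient descent on the prediction error.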
