In this thesis, artificial neural networks (ANNs) are used for the prediction of financial and macroeconomic time series. ANNs build internal models of the problem and are therefore suited to fields in which accurate mathematical models cannot be formed, e.g. meteorology and economics. Feedforward neural networks (FFNNs), often trained with backpropagation, are a common type of ANN. However, FFNNs lack short-term memory, i.e. they respond with the same output for a given input regardless of earlier inputs. In addition, backpropagation tunes only the weights of a network and does not optimize its architecture. In this thesis, recurrent neural networks (RNNs), trained with an evolutionary algorithm (EA), are used instead. RNNs can have short-term memory, and the EA has the advantage that it affects the architecture of the networks and not only the weights. However, RNNs are often hard to train, i.e. the training algorithm tends to get stuck in local optima. To overcome this problem, a method is presented in which the initial population of the EA is seeded with an FFNN pre-trained with backpropagation. During evolution, feedback connections are allowed, which transforms the FFNN into an RNN.
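To make the scheme concrete, the sketch below evolves feedback connections into a feedforward genome. It is a minimal illustration, not the thesis's actual implementation: the weight-matrix encoding, population size, mutation rates, and the toy prediction task are all assumptions, and a random lower-triangular matrix stands in for weights that backpropagation would have produced.

```python
import numpy as np

rng = np.random.default_rng(0)

# Network layout: units are ordered input -> hidden -> output, and each
# genome is one full N x N weight matrix W, where W[i, j] is the weight
# from unit j to unit i. A strictly lower-triangular W is a pure FFNN;
# any entry on or above the diagonal is a feedback connection.
N_IN, N_HID, N_OUT = 3, 5, 1
N = N_IN + N_HID + N_OUT

def step(W, state, x):
    # Update units in index order. Weights below the diagonal read values
    # already computed this step (feedforward); weights on or above the
    # diagonal read values left over from the previous step, which is
    # what gives an evolved RNN its short-term memory.
    state = state.copy()
    state[:N_IN] = x
    for i in range(N_IN, N):
        state[i] = np.tanh(W[i] @ state)
    return state

def fitness(W, series, targets):
    # Negative mean-squared one-step prediction error (higher is better).
    state, err = np.zeros(N), 0.0
    for x, t in zip(series, targets):
        state = step(W, state, x)
        err += np.mean((state[-N_OUT:] - t) ** 2)
    return -err / len(series)

# Toy task: predict the next value of the first input channel.
series = rng.normal(size=(200, N_IN))
targets = np.roll(series[:, :N_OUT], -1, axis=0)

# Seed the population with jittered copies of a "pre-trained FFNN".
W_ffnn = np.tril(rng.normal(0.0, 0.5, (N, N)), k=-1)
population = [W_ffnn + rng.normal(0.0, 0.01, (N, N)) for _ in range(20)]

for gen in range(30):
    population.sort(key=lambda W: fitness(W, series, targets), reverse=True)
    parents, population = population[:5], []
    for _ in range(20):
        child = parents[rng.integers(5)].copy()
        # Weight mutation: perturb a sparse subset of existing weights.
        child += rng.normal(0.0, 0.05, (N, N)) * (rng.random((N, N)) < 0.1)
        # Structural mutation: occasionally add a feedback connection
        # (from a later unit j back to an earlier-or-same unit i), which
        # is what gradually turns the FFNN into an RNN.
        if rng.random() < 0.3:
            i = rng.integers(N_IN, N)
            j = rng.integers(i, N)
            child[i, j] += rng.normal(0.0, 0.2)
        population.append(child)
```

Keeping the whole weight matrix in the genome lets the same mutation machinery adjust both weights and topology, which is the property the abstract attributes to the EA.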
The RNNs obtained with both methods outperform both a benchmark predictor and the FFNN trained with backpropagation on several financial and macroeconomic time series. The improvement in prediction error is small but significant (a few per cent on the validation data set).
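For concreteness, a relative improvement of this kind can be computed as below. The RMSE error measure and the naive "no change" benchmark are assumptions for illustration; the abstract does not specify which error measure or benchmark predictor the thesis uses.

```python
import numpy as np

def rmse(pred, target):
    # Root-mean-square one-step prediction error.
    pred, target = np.asarray(pred), np.asarray(target)
    return np.sqrt(np.mean((pred - target) ** 2))

def relative_improvement(model_pred, benchmark_pred, target):
    # Fractional reduction in RMSE relative to the benchmark predictor;
    # a return value of 0.03 corresponds to a 3 % improvement, i.e. the
    # "few per cent" quoted above.
    return 1.0 - rmse(model_pred, target) / rmse(benchmark_pred, target)

# A common benchmark on financial series is the naive "no change"
# predictor: benchmark_pred = val_series[:-1], target = val_series[1:].
```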