Boosting Recurrent Neural Networks for Time Series Prediction

We adapt a boosting algorithm to the problem of predicting future values of time series, using recurrent neural networks as base learners. Our experiments show that boosting does improve prediction accuracy, and that the weighted median combines the base learners better than the weighted mean.
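The abstract does not say which boosting algorithm is adapted or how the recurrent networks are trained, so the following is only a minimal sketch of one common regression-boosting scheme (AdaBoost.R2-style) that combines base predictions with a weighted median; the recurrent base learner is abstracted behind a generic fit/predict interface, and the `make_learner` / `sample_weight` conventions are illustrative assumptions, not the paper's method.

```python
import numpy as np

def weighted_median(values, weights):
    """Smallest value whose cumulative weight reaches half of the total weight."""
    values, weights = np.asarray(values), np.asarray(weights)
    order = np.argsort(values)
    v, w = values[order], weights[order]
    cdf = np.cumsum(w)
    return v[np.searchsorted(cdf, 0.5 * cdf[-1])]

def boost_regressor(X, y, make_learner, n_rounds=10):
    """AdaBoost.R2-style boosting loop (a sketch, not the paper's exact algorithm)."""
    n = len(y)
    D = np.full(n, 1.0 / n)                      # example weights, initially uniform
    learners, betas = [], []
    for _ in range(n_rounds):
        model = make_learner()
        model.fit(X, y, sample_weight=D)         # assumes the base learner accepts weights
        pred = model.predict(X)
        err = np.abs(pred - y)
        err = err / max(err.max(), 1e-12)        # linear loss rescaled to [0, 1]
        eps = np.sum(D * err)                    # weighted average loss
        if eps >= 0.5:                           # stop if the learner is too weak
            break
        beta = eps / (1.0 - eps)
        D = D * beta ** (1.0 - err)              # shrink weights of well-predicted examples
        D = D / D.sum()
        learners.append(model)
        betas.append(beta)
    return learners, np.array(betas)

def boosted_predict(learners, betas, X):
    """Combine base predictions with the weighted median, using weights log(1/beta)."""
    preds = np.array([m.predict(X) for m in learners])   # shape (n_rounds, n_samples)
    w = np.log(1.0 / betas)
    return np.array([weighted_median(preds[:, i], w) for i in range(preds.shape[1])])
```

In the paper's setting, `make_learner` would return a small recurrent network wrapped to expose fit/predict on windows of past values; replacing the weighted median in `boosted_predict` with a weighted mean gives the alternative combination rule the abstract compares against.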
