Comparing recurrent networks for time-series forecasting

This paper compares two methods for time-series forecasting. The first is based on traditional recurrent neural networks (RNNs), while the second is based on Reservoir Computing (RC), a recent paradigm that offers an intuitive way to exploit the temporal processing power of RNNs without the inconvenience of training them. We therefore compare the advantages and disadvantages of Reservoir Computing and RNNs on the time-series forecasting problem. The first method uses a Nonlinear Autoregressive network with eXogenous inputs (NARX), whose architecture was tuned by an optimization procedure that seeks the best mean squared error (MSE) on the training set. The second method, called RCDESIGN, combines an evolutionary algorithm with Reservoir Computing and simultaneously searches for the best parameter values, topology, and weight matrices without rescaling the reservoir by its spectral radius. RCDESIGN has yielded fast tracking and excellent performance on several benchmark problems, including the NARMA and Mackey-Glass time series.
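To make the Reservoir Computing idea concrete, the following is a minimal echo state network sketch in the classical formulation (not the paper's RCDESIGN method): a fixed random reservoir is driven by the input, and only a linear readout is trained by ridge regression. The spectral-radius rescaling step shown here is exactly the step RCDESIGN avoids. The toy sine-prediction task, reservoir size, and regularization constant are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy one-step-ahead prediction task (illustrative; the paper uses
# NARMA and Mackey-Glass benchmarks)
T = 500
u = np.sin(np.arange(T) * 0.2).reshape(-1, 1)  # input series
y = np.roll(u, -1)                             # target: next value

# Fixed random input and reservoir weights -- never trained
n_res = 100
W_in = rng.uniform(-0.5, 0.5, (n_res, 1))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
# Classical ESN scaling to spectral radius 0.9; RCDESIGN skips this
W *= 0.9 / max(abs(np.linalg.eigvals(W)))

# Drive the reservoir and collect its states
x = np.zeros((n_res, 1))
states = []
for t in range(T):
    x = np.tanh(W_in @ u[t:t + 1].T + W @ x)
    states.append(x.ravel())
X = np.array(states)

# Train only the linear readout (ridge regression), discarding an
# initial washout period while the reservoir state settles
washout = 50
A, b = X[washout:-1], y[washout:-1]
ridge = 1e-6
W_out = np.linalg.solve(A.T @ A + ridge * np.eye(n_res), A.T @ b)

mse = float(np.mean((A @ W_out - b) ** 2))  # training-set MSE
```

Because only `W_out` is fitted, training reduces to a single linear solve, which is the practical appeal of RC over full RNN training that the abstract refers to.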
