Time-Delay Reservoir Computers and High-Speed Information Processing Capacity

The aim of this presentation is to show how various ideas from the nonlinear stability theory of functional differential systems, stochastic modeling, and machine learning can be combined to create an approximating model that explains the working mechanism behind a certain type of reservoir computer. Reservoir computing is a recently introduced, brain-inspired machine learning paradigm capable of excellent performance in the processing of empirical data. We focus on time-delay based reservoir computers, which have been physically implemented using optical and electronic systems and have shown unprecedented data processing rates. Reservoir computing is well known for the ease of its associated training scheme, but also for the problematic sensitivity of its performance to the architecture parameters. We address the reservoir design problem, which remains the biggest challenge for the applicability of this information processing scheme. Our results use the available information on optimal reservoir working regimes to construct a functional link between the reservoir parameters and the device performance. This function is then used to explore various properties of the device and to choose the optimal reservoir architecture, thereby replacing the tedious and time-consuming parameter scans used so far in the literature.
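To make the scheme concrete, the following is a minimal numerical sketch of a time-delay reservoir: a single nonlinear node time-multiplexed into virtual neurons by an input mask, followed by a ridge-regression readout. The Mackey-Glass-type nonlinearity and all names and parameter values here (`mackey_glass`, `run_reservoir`, `train_readout`, `eta`, `gamma`, `p`, `n_virtual`, `ridge`) are illustrative assumptions for this sketch, not the model or the parameter choices analyzed in the presentation; the coupling between neighbouring virtual neurons induced by the continuous-time delay dynamics is also omitted for brevity.

```python
import numpy as np

# --- Minimal time-delay reservoir (TDR) sketch --------------------------
# A single Mackey-Glass-type nonlinear node is time-multiplexed into
# n_virtual "virtual" neurons via a random input mask. All parameter
# names and values are illustrative, not those of the cited works.

def mackey_glass(x, u, eta=0.4, gamma=0.05, p=2):
    """Mackey-Glass-type nonlinearity driven by state x and masked input u."""
    s = x + gamma * u
    return eta * s / (1.0 + s**p)

def run_reservoir(inputs, n_virtual=50, seed=0):
    """Drive the delay reservoir with a scalar input sequence.

    Returns one row of virtual-neuron states per time step. Each virtual
    neuron is updated from its own value one delay period earlier plus the
    masked input (a fully discretized TDR approximation).
    """
    rng = np.random.default_rng(seed)
    mask = rng.uniform(-1.0, 1.0, n_virtual)      # time-multiplexing mask
    states = np.zeros((len(inputs), n_virtual))
    prev = np.zeros(n_virtual)                    # states one delay ago
    for k, u in enumerate(inputs):
        prev = mackey_glass(prev, mask * u)
        states[k] = prev
    return states

def train_readout(states, targets, ridge=1e-6):
    """Linear readout trained by ridge regression (the standard RC training step)."""
    X = np.hstack([states, np.ones((len(states), 1))])   # add bias column
    W = np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ targets)
    return W

# Toy task: one-step-ahead forecasting of a noisy sine wave.
t = np.arange(2000)
signal = np.sin(0.05 * t) + 0.05 * np.random.default_rng(1).standard_normal(len(t))
X = run_reservoir(signal[:-1])
W = train_readout(X, signal[1:])
pred = np.hstack([X, np.ones((len(X), 1))]) @ W
print("train NMSE:", np.mean((pred - signal[1:])**2) / np.var(signal[1:]))
```

In this toy setting the readout weights are obtained in closed form from a single linear solve, which is what makes the training step cheap; the performance, however, depends strongly on the reservoir parameters (here `eta`, `gamma`, `p`, and the mask), which is exactly the sensitivity that the functional link between parameters and performance is meant to resolve without exhaustive parameter scans.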
