Encoding of a Chaotic Attractor in a Reservoir Computer: A Directional Fiber Investigation

In this work, we study the dynamical properties of reservoir computing, a machine learning technique, to gain insight into how representations of chaotic signals are encoded through learning. We train the reservoir's output weights on individual chaotic signals from the Lorenz system, which is known to have three fixed points, all of which are unstable in the chaotic regime of the strange attractor. Examining the fixed points of the trained reservoir allows us to determine whether the intrinsic dynamics of the Lorenz system are transposed onto the reservoir's dynamics during learning. To locate these fixed points, we use a novel technique called directional fibers: mathematical objects that systematically locate fixed points in high-dimensional spaces, and that prove competitive with, and complementary to, traditional approaches. We find that after training of the output weights, the reservoir contains a higher-dimensional projection of the Lorenz fixed points with matching stability, even though the training data did not include the fixed points. This shows that the reservoir does learn dynamical properties of the Lorenz attractor. The directional fiber also identifies additional fixed points in the reservoir's state space outside the projected Lorenz attractor region; these amplify perturbations during prediction and contribute to the failure of long-term time series prediction.
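The three Lorenz fixed points and their instability mentioned above can be verified directly. The following minimal sketch (assuming the standard chaotic parameters sigma = 10, rho = 28, beta = 8/3, which the paper's experiments may or may not use) computes the fixed points analytically and checks each one's stability via the eigenvalues of the Jacobian:

```python
import numpy as np

# Standard chaotic Lorenz parameters (an assumption; the paper's exact
# values are not given in the abstract).
SIGMA, RHO, BETA = 10.0, 28.0, 8.0 / 3.0

def lorenz_fixed_points(sigma=SIGMA, rho=RHO, beta=BETA):
    """Return the three fixed points of the Lorenz system for rho > 1:
    the origin and the symmetric pair C+/C-."""
    r = np.sqrt(beta * (rho - 1.0))
    origin = np.zeros(3)
    c_plus = np.array([r, r, rho - 1.0])
    c_minus = np.array([-r, -r, rho - 1.0])
    return [origin, c_plus, c_minus]

def lorenz_jacobian(p, sigma=SIGMA, rho=RHO, beta=BETA):
    """Jacobian of the Lorenz vector field
    (sigma*(y - x), x*(rho - z) - y, x*y - beta*z) at p = (x, y, z)."""
    x, y, z = p
    return np.array([
        [-sigma,  sigma,  0.0],
        [rho - z, -1.0,   -x],
        [y,        x,     -beta],
    ])

# A fixed point is unstable when its Jacobian has an eigenvalue with
# positive real part; in the chaotic regime all three are unstable.
for p in lorenz_fixed_points():
    eigs = np.linalg.eigvals(lorenz_jacobian(p))
    print(p, "unstable:", bool(np.max(eigs.real) > 0))
```

For rho = 28 (above the Hopf value rho_H = sigma(sigma + beta + 3)/(sigma - beta - 1) ≈ 24.74), all three fixed points are unstable, consistent with the setting studied here.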
