On Lyapunov Exponents for RNNs: Understanding Information Propagation Using Dynamical Systems Tools

Recurrent neural networks (RNNs) have been successfully applied to a variety of problems involving sequential data, but their optimization is sensitive to parameter initialization, architecture, and optimizer hyperparameters. Considering RNNs as dynamical systems, a natural way to capture stability, i.e., the growth and decay of information over long iterates, is the Lyapunov spectrum, the set of Lyapunov exponents (LEs). The LEs bear on the stability of RNN training dynamics because the forward propagation of information is related to the backward propagation of error gradients. LEs measure the asymptotic rates of expansion and contraction of nonlinear system trajectories, and they generalize stability analysis to the time-varying attractors that structure the non-autonomous dynamics of data-driven RNNs. As a tool for understanding and exploiting the stability of training dynamics, the Lyapunov spectrum fills an existing gap between prescriptive mathematical approaches of limited scope and computationally expensive empirical approaches. To leverage this tool, we implement an efficient way to compute LEs for RNNs during training, discuss the aspects specific to standard RNN architectures driven by typical sequential datasets, and show that the Lyapunov spectrum can serve as a robust readout of training stability across hyperparameters. With this exposition-oriented contribution, we hope to draw attention to this understudied but theoretically grounded tool for understanding training stability in RNNs.

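The abstract mentions an efficient computation of LEs for RNNs during training; the paper's own implementation is not reproduced here. Below is a minimal NumPy sketch of the standard QR-reorthonormalization (Benettin-style) procedure for estimating the Lyapunov spectrum of a driven vanilla tanh RNN. The function name `lyapunov_spectrum`, the network size, the gain `g`, and the synthetic Gaussian input sequence are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch: Lyapunov spectrum of h_{t+1} = tanh(W_h h_t + W_x x_t + b)
# via QR reorthonormalization of the tangent dynamics (Benettin et al. style).
# All shapes, weights, and inputs below are illustrative assumptions.
import numpy as np

def lyapunov_spectrum(W_h, W_x, b, inputs, h0, k=None):
    """Estimate the leading k Lyapunov exponents of a driven tanh RNN."""
    n = W_h.shape[0]
    k = n if k is None else k
    h = h0.copy()
    Q = np.eye(n)[:, :k]          # orthonormal basis spanning the tangent subspace
    le_sums = np.zeros(k)
    for x in inputs:
        h = np.tanh(W_h @ h + W_x @ x + b)
        J = (1.0 - h ** 2)[:, None] * W_h     # Jacobian dh_{t+1}/dh_t = diag(1 - h^2) W_h
        Q, R = np.linalg.qr(J @ Q)            # evolve tangent vectors, re-orthonormalize
        le_sums += np.log(np.abs(np.diag(R)) + 1e-30)  # accumulate local expansion rates
    return le_sums / len(inputs)              # average log expansion per time step

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, d, T = 64, 8, 2000
    g = 1.2                                   # recurrent gain; g > 1 tends toward chaos
    W_h = g * rng.standard_normal((n, n)) / np.sqrt(n)
    W_x = rng.standard_normal((n, d)) / np.sqrt(d)
    b = np.zeros(n)
    inputs = 0.5 * rng.standard_normal((T, d))  # synthetic driving sequence
    les = lyapunov_spectrum(W_h, W_x, b, inputs, h0=np.zeros(n), k=10)
    print("leading Lyapunov exponents:", np.round(les, 3))
```

A positive leading exponent in this sketch indicates exponential separation of nearby hidden-state trajectories (chaos), while an all-negative spectrum indicates contraction; the same readout, computed along training, is what the abstract proposes as a diagnostic of training stability.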