Eli Shlizerman | Ryan Vogt | Guillaume Lajoie | Maximilian Puelma Touzel
[1] V. Araújo. Random Dynamical Systems, 2006, math/0608162.
[2] Surya Ganguli, et al. Exponential expressivity in deep neural networks through transient chaos, 2016, NIPS.
[3] G. Benettin, et al. Lyapunov characteristic exponents for smooth dynamical systems and for Hamiltonian systems; a method for computing all of them. Part 1: Theory, 1980.
[4] David J. Schwab, et al. Gating creates slow modes and controls phase-space complexity in GRUs and LSTMs, 2020, MSML.
[5] Eric Shea-Brown, et al. Chaos and reliability in balanced spiking networks with temporal drive, 2012, Physical Review E.
[6] Rainer Engelken, et al. Lyapunov spectra of chaotic recurrent neural networks, 2020, Physical Review Research.
[7] Zhen Zhang, et al. Convolutional Sequence to Sequence Model for Human Dynamics, 2018, IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[8] Greg Yang, et al. Scaling Limits of Wide Neural Networks with Weight Sharing: Gaussian Process Behavior, Gradient Independence, and Neural Tangent Kernel Derivation, 2019, arXiv.
[9] Yann LeCun, et al. Orthogonal RNNs and Long-Memory Tasks, 2016, arXiv.
[10] W. Gerstner, et al. Non-normal amplification in random balanced neuronal networks, 2012, Physical Review E.
[11] F. Wolf, et al. Dynamical entropy production in spiking neuron networks in the balanced state, 2010, Physical Review Letters.
[12] Surya Ganguli, et al. Resurrecting the sigmoid in deep learning through dynamical isometry: theory and practice, 2017, NIPS.
[13] Schuster, et al. Suppressing chaos in neural networks by noise, 1992, Physical Review Letters.
[14] Surya Ganguli, et al. The Emergence of Spectral Universality in Deep Networks, 2018, AISTATS.
[15] Maximilian Puelma Touzel. Cellular dynamics and stable chaos in balanced networks, 2016.
[16] Robert A. Legenstein, et al. Edge of chaos and prediction of computational performance for neural circuit models, 2007, Neural Networks (2007 Special Issue).
[17] Yoshua Bengio, et al. Non-normal Recurrent Neural Network (nnRNN): learning long time dependencies while improving expressivity with transient dynamics, 2019, NeurIPS.
[18] Evangelos A. Theodorou, et al. Deep Learning Theory Review: An Optimal Control and Dynamical Systems Perspective, 2019, arXiv.
[19] Thomas Laurent, et al. A recurrent neural network without chaos, 2016, ICLR.
[20] Fei-Fei Li, et al. Visualizing and Understanding Recurrent Networks, 2015, arXiv.
[21] L. Dieci, et al. Computation of a few Lyapunov exponents for continuous and discrete dynamical systems, 1995.
[22] Yang Zheng, et al. R-FORCE: Robust Learning for Random Recurrent Neural Networks, 2020, arXiv.
[23] Yoshua Bengio, et al. Learning long-term dependencies with gradient descent is difficult, 1994, IEEE Transactions on Neural Networks.
[24] Razvan Pascanu, et al. On the difficulty of training recurrent neural networks, 2012, ICML.
[25] Samuel S. Schoenholz, et al. Dynamical Isometry and a Mean Field Theory of LSTMs and GRUs, 2019, arXiv.
[26] Moritz Helias, et al. Optimal Sequence Memory in Driven Random Networks, 2016, Physical Review X.
[27] Yann LeCun, et al. Recurrent Orthogonal Networks and Long-Memory Tasks, 2016, ICML.