Nonlinear random matrix theory for deep learning
[1] V. Marčenko, et al. Distribution of eigenvalues for some sets of random matrices, 1967.
[2] Sompolinsky, et al. Spin-glass models of neural networks, 1985, Physical Review A, General Physics.
[3] E. Gardner, et al. Optimal storage properties of neural network models, 1988.
[4] Geoffrey E. Hinton, et al. Bayesian Learning for Neural Networks, 1995.
[5] Benjamin Recht, et al. Random Features for Large-Scale Kernel Machines, 2007, NIPS.
[6] Noureddine El Karoui, et al. The spectrum of kernel random matrices, 2010, 1001.0492.
[7] Xiuyuan Cheng, et al. The spectrum of random inner-product kernel matrices, 2012, 1202.3155.
[8] Tara N. Sainath, et al. Fundamental technologies in modern speech recognition, 2012, IEEE Signal Processing Magazine, DOI 10.1109/MSP.2012.2205597.
[9] T. Tao. Topics in Random Matrix Theory, 2012.
[10] Geoffrey E. Hinton, et al. ImageNet classification with deep convolutional neural networks, 2012, Commun. ACM.
[11] Thomas Dupic, et al. Spectral density of products of Wishart dilute random matrices. Part I: the dense case, 2014, 1401.7802.
[12] Surya Ganguli, et al. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks, 2013, ICLR.
[13] Sergey Ioffe, et al. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift, 2015, ICML.
[14] Yann LeCun, et al. The Loss Surfaces of Multilayer Networks, 2014, AISTATS.
[15] Surya Ganguli, et al. Exponential expressivity in deep neural networks through transient chaos, 2016, NIPS.
[16] Yoram Singer, et al. Toward Deeper Understanding of Neural Networks: The Power of Initialization and a Dual View on Expressivity, 2016, NIPS.
[17] George Kurian, et al. Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation, 2016, ArXiv.
[18] Heiga Zen, et al. WaveNet: A Generative Model for Raw Audio, 2016, SSW.
[19] Surya Ganguli, et al. On the Expressive Power of Deep Neural Networks, 2016, ICML.
[20] Zhenyu Liao, et al. A Random Matrix Approach to Neural Networks, 2017, ArXiv.
[21] Surya Ganguli, et al. Resurrecting the sigmoid in deep learning through dynamical isometry: theory and practice, 2017, NIPS.
[22] Jascha Sohl-Dickstein, et al. A Correspondence Between Random Neural Networks and Statistical Field Theory, 2017, ArXiv.