Maithra Raghu | Ben Poole | Jon M. Kleinberg | Surya Ganguli | Jascha Sohl-Dickstein
[1] Norbert Sauer, et al. On the Density of Families of Sets, 1972, J. Comb. Theory A.
[2] D. Kershaw. Some extensions of W. Gautschi’s inequalities for the gamma function, 1983.
[3] Kurt Hornik, et al. Multilayer feedforward networks are universal approximators, 1989, Neural Networks.
[4] Eduardo Sontag, et al. A Comparison of the Computational Power of Sigmoid and Boolean Threshold Circuits, 1994.
[5] Yoshua Bengio, et al. Gradient-based learning applied to document recognition, 1998, Proc. IEEE.
[6] Eduardo Sontag. VC dimension of neural networks, 1998.
[7] Peter L. Bartlett, et al. Almost Linear VC-Dimension Bounds for Piecewise Polynomial Networks, 1998, Neural Computation.
[8] Peter L. Bartlett, et al. Vapnik-Chervonenkis dimension of neural nets, 2003.
[9] G. Lewicki, et al. Approximation by Superpositions of a Sigmoidal Function, 2003.
[10] Samy Bengio, et al. Links between perceptrons, MLPs and SVMs, 2004, ICML.
[11] Peter Orlik. Hyperplane Arrangements, 2009, Encyclopedia of Optimization.
[12] Geoffrey E. Hinton, et al. Rectified Linear Units Improve Restricted Boltzmann Machines, 2010, ICML.
[13] Geoffrey E. Hinton, et al. ImageNet classification with deep convolutional neural networks, 2012, Commun. ACM.
[14] R. Zemel, et al. On the Representational Efficiency of Restricted Boltzmann Machines, 2013, NIPS.
[15] Inci Ege, et al. On Some Inequalities for the Gamma Function, 2013.
[16] Franco Scarselli, et al. On the Complexity of Neural Network Classifiers: A Comparison Between Shallow and Deep Architectures, 2014, IEEE Transactions on Neural Networks and Learning Systems.
[17] Panos Siafarikas. On Some Inequalities for the Gamma Function, 2014.
[18] P. Baldi, et al. Searching for exotic particles in high-energy physics with deep learning, 2014, Nature Communications.
[19] Surya Ganguli, et al. Identifying and attacking the saddle point problem in high-dimensional non-convex optimization, 2014, NIPS.
[20] Razvan Pascanu, et al. On the Number of Linear Regions of Deep Neural Networks, 2014, NIPS.
[21] Razvan Pascanu, et al. On the number of inference regions of deep feed forward networks with piece-wise linear activations, 2013, ICLR.
[22] Joan Bruna, et al. Intriguing properties of neural networks, 2013, ICLR.
[23] Geoffrey E. Hinton, et al. Distilling the Knowledge in a Neural Network, 2015, ArXiv.
[24] Nadav Cohen, et al. On the Expressive Power of Deep Learning: A Tensor Analysis, 2015, COLT.
[25] Sergey Ioffe, et al. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift, 2015, ICML.
[26] Oriol Vinyals, et al. Qualitatively characterizing neural network optimization problems, 2014, ICLR.
[27] Yann LeCun, et al. The Loss Surfaces of Multilayer Networks, 2014, AISTATS.
[28] Jonathon Shlens, et al. Explaining and Harnessing Adversarial Examples, 2014, ICLR.
[29] Roger B. Grosse, et al. Optimizing Neural Networks with Kronecker-factored Approximate Curvature, 2015, ICML.
[30] Matus Telgarsky, et al. Representation Benefits of Deep Feedforward Networks, 2015, ArXiv.
[31] Leonidas J. Guibas, et al. Deep Knowledge Tracing, 2015, NIPS.
[32] Yoram Singer, et al. Train faster, generalize better: Stability of stochastic gradient descent, 2015, ICML.
[33] Surya Ganguli, et al. Exponential expressivity in deep neural networks through transient chaos, 2016, NIPS.
[34] Demis Hassabis, et al. Mastering the game of Go with deep neural networks and tree search, 2016, Nature.
[35] Vivek Srikumar, et al. Expressiveness of Rectifier Networks, 2015, ICML.
[36] Ohad Shamir, et al. The Power of Depth for Feedforward Neural Networks, 2015, COLT.
[37] Yoram Singer, et al. Toward Deeper Understanding of Neural Networks: The Power of Initialization and a Dual View on Expressivity, 2016, NIPS.
[38] Tomaso Poggio, et al. Learning Functions: When Is Deep Better Than Shallow, 2016, ArXiv (1603.00988).
[39] Tomaso A. Poggio, et al. Learning Real and Boolean Functions: When Is Deep Better Than Shallow, 2016, ArXiv.
[40] Finn Macleod. Unreasonable Effectivness of Deep Learning, 2018, ArXiv.