[1] L. Ronkin. Liouville's theorems for functions holomorphic on the zero set of a polynomial, 1979.
[2] George Cybenko, et al. Approximation by superpositions of a sigmoidal function, 1989, Math. Control. Signals Syst.
[3] Kurt Hornik, et al. Multilayer feedforward networks are universal approximators, 1989, Neural Networks.
[4] Allan Pinkus, et al. Multilayer Feedforward Networks with a Non-Polynomial Activation Function Can Approximate Any Function, 1991, Neural Networks.
[5] F. Jones. Lebesgue Integration on Euclidean Space, 1993.
[6] Yoshua Bengio, et al. Convolutional networks for images, speech, and time series, 1998.
[7] W. Hackbusch, et al. A New Scheme for the Tensor Representation, 2009.
[8] Tamara G. Kolda, et al. Tensor Decompositions and Applications, 2009, SIAM Rev.
[9] Dima Damen, et al. Recognizing linked events: Searching the space of feasible explanations, 2009, IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[10] Amir Yehudayoff, et al. Arithmetic Circuits: A survey of recent results and open questions, 2010, Found. Trends Theor. Comput. Sci.
[11] Geoffrey E. Hinton, et al. Rectified Linear Units Improve Restricted Boltzmann Machines, 2010, ICML.
[12] Yoshua Bengio, et al. Shallow vs. Deep Sum-Product Networks, 2011, NIPS.
[13] Pedro M. Domingos, et al. Sum-product networks: A new deep architecture, 2011, IEEE International Conference on Computer Vision Workshops (ICCV Workshops).
[14] Yair Weiss, et al. "Natural Images, Gaussian Mixtures and Dead Leaves", 2012, NIPS.
[15] Geoffrey E. Hinton, et al. ImageNet classification with deep convolutional neural networks, 2012, Commun. ACM.
[16] W. Hackbusch. Tensor Spaces and Numerical Tensor Calculus, 2012, Springer Series in Computational Mathematics.
[17] Razvan Pascanu, et al. On the number of response regions of deep feed forward networks with piece-wise linear activations, 2013, arXiv:1312.6098.
[18] James Martens, et al. On the Expressive Efficiency of Sum Product Networks, 2014, arXiv.
[19] Amnon Shashua, et al. SimNets: A Generalization of Convolutional Networks, 2014, arXiv.
[20] Yelong Shen, et al. Learning semantic representations using convolutional neural networks for web search, 2014, WWW.
[21] Ming Yang, et al. DeepFace: Closing the Gap to Human-Level Performance in Face Verification, 2014, IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[22] Razvan Pascanu, et al. On the Number of Linear Regions of Deep Neural Networks, 2014, NIPS.
[23] Razvan Pascanu, et al. On the number of response regions of deep feed forward networks with piece-wise linear activations, 2013, ICLR.
[24] Nadav Cohen, et al. On the Expressive Power of Deep Learning: A Tensor Analysis, 2015, COLT.
[25] Tomaso Poggio, et al. I-theory on depth vs width: hierarchical function composition, 2015.
[26] Geoffrey E. Hinton, et al. Deep Learning, 2015, Nature.
[27] Amos J. Storkey, et al. Training Deep Convolutional Neural Networks to Play Go, 2015, ICML.
[28] Dumitru Erhan, et al. Going deeper with convolutions, 2015, IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[29] Andrew Zisserman, et al. Very Deep Convolutional Networks for Large-Scale Image Recognition, 2014, ICLR.
[30] Izhar Wallach, et al. AtomNet: A Deep Convolutional Neural Network for Bioactivity Prediction in Structure-based Drug Discovery, 2015, arXiv.
[31] Matus Telgarsky, et al. Benefits of Depth in Neural Networks, 2016, COLT.
[32] Amnon Shashua, et al. Deep SimNets, 2016, IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[33] Ohad Shamir, et al. The Power of Depth for Feedforward Neural Networks, 2015, COLT.