[1] Reinder Banning, et al. Sampling Theory, 2012.
[2] E. Gardner. The space of interactions in neural network models, 1988.
[3] R. Vershynin. Estimation in High Dimensions: A Geometric Perspective, 2014, arXiv:1405.5103.
[4] Yoshua Bengio, et al. Learning Deep Architectures for AI, 2007, Found. Trends Mach. Learn.
[5] E. Gardner, et al. Maximum Storage Capacity in Neural Networks, 1987.
[6] David D. Cox, et al. Untangling invariant object recognition, 2007, Trends in Cognitive Sciences.
[7] B. F. Beck, et al. The what?, 1986.
[8] Daniel D. Lee, et al. Learning Data Manifolds with a Cutting Plane Method, 2017, Neural Computation.
[9] J. S. Wholey. IEEE Transactions on Electronic Computers, 1963.
[10] Daniel L. K. Yamins, et al. Deep Neural Networks Rival the Representation of Primate IT Cortex for Core Visual Object Recognition, 2014, PLoS Comput. Biol.
[11] Rémi Monasson, et al. Properties of neural networks storing spatially correlated patterns, 1992.
[12] Isabelle K. Carter, et al. Dynamics of Learning, 1947.
[13] G. C. Shephard, et al. Convex Polytopes, 1969, The Mathematical Gazette.
[14] H. Sebastian Seung, et al. The Manifold Ways of Perception, 2000, Science.
[15] Y. Cohen, et al. The what, where and how of auditory-object perception, 2013, Nature Reviews Neuroscience.
[16] F. Gerl, et al. Storage capacity and optimal learning of Potts-model perceptrons by a cavity method, 1994.
[17] S. T. Roweis, et al. Nonlinear dimensionality reduction by locally linear embedding, 2000, Science.
[18] Haim Sompolinsky, et al. Optimal Degrees of Synaptic Connectivity, 2017, Neuron.
[19] Haim Sompolinsky, et al. Linear readout of object manifolds, 2015, Physical Review E.
[20] J. Tenenbaum, et al. A global geometric framework for nonlinear dimensionality reduction, 2000, Science.
[21] Vladimir Vapnik, et al. Statistical learning theory, 1998.
[22] M. Opper, et al. Storage of correlated patterns in a perceptron, 1995.
[23] Surya Ganguli, et al. Exponential expressivity in deep neural networks through transient chaos, 2016, NIPS.
[24] Tōru Maruyama. On a few developments in convex analysis, 1977.
[25] L. Abbott, et al. Stimulus-dependent suppression of chaos in recurrent neural networks, 2009, Physical Review E.
[26] J. Herskowitz, et al. Proceedings of the National Academy of Sciences, USA, 1996, Current Biology.
[27] Marc'Aurelio Ranzato, et al. Unsupervised Learning of Invariant Feature Hierarchies with Applications to Object Recognition, 2007, IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[28] Quoc V. Le, et al. Measuring Invariances in Deep Networks, 2009, NIPS.
[29] W. Marsden. I and J, 2012.
[30] Seung-Hyeok Kye. Faces for two-qubit separable states and the convex hulls of trigonometric moment curves, 2013.
[31] Mirta B. Gordon, et al. Statistical mechanics of learning with soft margin classifiers, 2001, Physical Review E.
[32] James J. DiCarlo, et al. How Does the Brain Solve Visual Object Recognition?, 2012, Neuron.
[33] Thomas M. Cover, et al. Geometrical and Statistical Properties of Systems of Linear Inequalities with Applications in Pattern Recognition, 1965, IEEE Trans. Electron. Comput.
[34] Zeev Smilansky. Convex hulls of generalized moment curves, 1985.
[35] Dumitru Erhan, et al. Going deeper with convolutions, 2014, 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[36] Kevin M. Franks, et al. Complementary codes for odor identity and intensity in olfactory cortex, 2017, eLife.
[37] Li Fei-Fei, et al. ImageNet: A large-scale hierarchical image database, 2009, CVPR.
[38] Shun-ichi Amari, et al. Four Types of Learning Curves, 1992, Neural Computation.
[39] George Loizou, et al. Computer vision and pattern recognition, 2007, Int. J. Comput. Math.
[40] Mark Rudelson, et al. Convex bodies with minimal mean width, 2000.