Generalization in a linear perceptron in the presence of noise
[1] David Haussler, et al. What Size Net Gives Valid Generalization?, 1989, Neural Computation.
[2] F. Vallet, et al. Linear and Nonlinear Extension of the Pseudo-Inverse Solution for Learning Boolean Functions, 1989.
[3] E. Gardner, et al. Three unfinished works on the optimal storage capacity of networks, 1989.
[4] Yaser S. Abu-Mostafa, et al. The Vapnik-Chervonenkis Dimension: Information versus Complexity in Learning, 1989, Neural Computation.
[5] P. Réfrégier, et al. An Improved Version of the Pseudo-Inverse Solution for Classification and Neural Networks, 1989.
[6] J. Hertz, et al. Phase transitions in simple learning, 1989.
[7] D. J. Wallace, et al. Training with noise and the storage of correlated patterns in a neural network model, 1989.
[8] Györgyi, et al. Inference of a rule by a neural network with thermal noise, 1990, Physical Review Letters.
[9] The Langevin method in the statistical dynamics of learning, 1990.
[10] M. Opper, et al. On the ability of the optimal perceptron to generalise, 1990.
[11] Sompolinsky, et al. Learning from examples in large neural networks, 1990, Physical Review Letters.
[12] Haim Sompolinsky, et al. Learning from Examples in a Single-Layer Neural Network, 1990.
[13] Vijay K. Samalam, et al. Exhaustive Learning, 1990, Neural Computation.
[14] Opper, et al. Generalization performance of Bayes optimal classification algorithm for learning a perceptron, 1991, Physical Review Letters.
[15] A. Krogh. Learning with noise in a linear perceptron, 1992.