Generalization ability of perceptrons with continuous outputs.
[1] James L. McClelland, et al. Parallel distributed processing: explorations in the microstructure of cognition, vol. 1: foundations, 1986.
[2] Opper, et al. Learning of correlated patterns in spin-glass networks by local learning rules. Physical Review Letters, 1987.
[3] Kanter, et al. Associative recall of memory without errors. Physical Review A, General Physics, 1987.
[4] F. Vallet, et al. Linear and nonlinear extension of the pseudo-inverse solution for learning Boolean functions, 1989.
[5] M. Opper, et al. On the ability of the optimal perceptron to generalise, 1990.
[6] Wolfgang Kinzel, et al. Improving a network generalization ability by selecting examples, 1990.
[7] Petri Koistinen, et al. Using additive noise in back-propagation training. IEEE Transactions on Neural Networks, 1992.
[8] Sompolinsky, et al. Statistical mechanics of learning from examples. Physical Review A, Atomic, Molecular, and Optical Physics, 1992.
[9] A. Krogh. Learning with noise in a linear perceptron, 1992.
[10] T. Watkin, et al. Selecting examples for perceptrons, 1992.
[11] J. Hertz, et al. Generalization in a linear perceptron in the presence of noise, 1992.