Generalization in a linear perceptron in the presence of noise

The authors study the evolution of the generalization ability of a simple linear perceptron with N inputs which learns to imitate a 'teacher perceptron'. The system is trained on p = alpha N example inputs drawn from some distribution, and the generalization ability is measured by the average agreement with the teacher on test examples drawn from the same distribution. The dynamics can be solved analytically and exhibit a phase transition from imperfect to perfect generalization at alpha = 1 when there are no errors (static noise) in the training examples. If the examples are produced by an erroneous teacher, overfitting is observed, i.e. the generalization error starts to increase after a finite training time. It is shown that a weight decay of the same size as the variance of the noise (errors) on the teacher improves generalization and suppresses the overfitting. The generalization error as a function of time is calculated numerically for various values of the parameters. Finally, dynamic noise in the training is considered. White noise on the inputs corresponds on average to a weight decay and can thus improve generalization, whereas white noise on the weights or on the output degrades generalization. Generalization is particularly sensitive to noise on the weights (for alpha < 1), where it makes the error increase steadily with time, but this effect is also shown to be damped by a weight decay. Weight noise and output noise act similarly above the transition at alpha = 1.
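To make the setting concrete, the following is a minimal simulation sketch (not taken from the paper) of the student-teacher setup described above: a linear student trained by gradient descent on p = alpha N examples labelled by a noisy linear teacher, with an optional weight-decay term, and its generalization error measured on fresh test inputs from the same distribution. All variable names, parameter values, and the plain gradient-descent loop are illustrative assumptions rather than the paper's analytical treatment.

```python
import numpy as np

# Illustrative student-teacher simulation (assumed setup, not the paper's code):
# a linear student learns to imitate a linear teacher whose outputs carry
# static noise, with weight decay regularizing the student.

rng = np.random.default_rng(0)

N = 200                   # input dimension
alpha = 0.5               # p = alpha * N training examples
p = int(alpha * N)
eta = 0.01                # learning rate (assumed)
lam = 0.1                 # weight-decay strength (assumed)
sigma_out = 0.3           # std of noise on the teacher's outputs (assumed)
steps = 2000

w_teacher = rng.normal(size=N) / np.sqrt(N)
X = rng.normal(size=(p, N))                          # training inputs
y = X @ w_teacher + sigma_out * rng.normal(size=p)   # noisy teacher outputs

w = np.zeros(N)                                      # student weights
for t in range(steps):
    grad = X.T @ (X @ w - y) / p + lam * w           # squared-error gradient plus weight decay
    w -= eta * grad

# Generalization error: mean squared disagreement with the noise-free teacher
# on test examples drawn from the same input distribution.
X_test = rng.normal(size=(5000, N))
gen_error = np.mean((X_test @ w - X_test @ w_teacher) ** 2)
print(f"generalization error after {steps} steps: {gen_error:.4f}")
```

Rerunning such a sketch with lam = 0 and with lam comparable to sigma_out**2 gives a rough numerical feel for the claim that a weight decay matched to the teacher-noise variance suppresses overfitting, though the paper's results come from the exact solution of the learning dynamics rather than from simulation.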
