Learning with noise in a linear perceptron

The learning of a set of p random patterns in a linear perceptron is studied in the limit of a large number N of input units, with noise on the weights, inputs and output. The problem is formulated in continuous time as a Langevin equation, and the first task is to evaluate the response or Green function for the system. White noise on the output is shown to correspond to spatially correlated weight noise acting only in a subspace of the weight space. The input noise is shown to act as a simple weight decay of a size proportional to the load parameter alpha = p/N. With no weight decay, the relaxation time diverges at alpha = 1. With a weight decay the relaxation time becomes shorter and remains finite at alpha = 1, but at the cost of a larger asymptotic learning error, which is found analytically. It is also shown that a small weight decay decreases the effect of noise on the weights or on the output.
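
The setting described above can be illustrated numerically. The following Python sketch (not the paper's analytical treatment) discretizes a Langevin equation for gradient-descent learning of p = alpha*N random patterns under a quadratic training error, with an optional weight decay lambda_ and white noise of strength T on the weights. The pattern statistics, learning rate, noise level and all other parameter values are illustrative assumptions, chosen only to show the qualitative behaviour: near alpha = 1 the error relaxes slowly without weight decay, while a small weight decay speeds up relaxation at the cost of a larger asymptotic error.

```python
import numpy as np

def simulate(N=200, alpha=0.95, lambda_=0.0, T=0.01, dt=0.01, steps=5000, seed=0):
    """Discretized Langevin dynamics for a linear perceptron (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    p = int(alpha * N)
    xi = rng.standard_normal((p, N)) / np.sqrt(N)   # random input patterns
    zeta = rng.standard_normal(p)                   # random target outputs
    w = np.zeros(N)                                 # weights start at zero
    errors = []
    for _ in range(steps):
        delta = zeta - xi @ w                       # output errors on all patterns
        grad = -xi.T @ delta / p                    # gradient of the quadratic training error
        noise = np.sqrt(2 * T * dt) * rng.standard_normal(N)  # white noise on the weights
        w += -dt * (grad + lambda_ * w) + noise     # Langevin update with weight decay
        errors.append(0.5 * np.mean(delta ** 2))
    return np.array(errors)

# Compare relaxation with and without a small weight decay near alpha = 1.
for lam in (0.0, 0.1):
    e = simulate(lambda_=lam)
    print(f"lambda = {lam}: final training error = {e[-1]:.4f}")
```

Plotting the returned error curves over time would show the divergence of the relaxation time as alpha approaches 1 in the absence of weight decay, and the trade-off between faster relaxation and higher asymptotic error when a weight decay is included.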