On-Line Learning with a Perceptron

We study on-line learning of a linearly separable rule with a simple perceptron. Training utilizes a sequence of uncorrelated, randomly drawn $N$-dimensional input examples. In the thermodynamic limit the generalization error after training with $P$ such examples can be calculated exactly. For the standard perceptron algorithm it decreases like $(N/P)^{1/3}$ for large $P/N$, in contrast to the faster $(N/P)^{1/2}$ behaviour of so-called Hebbian learning. Furthermore, we show that a specific parameter-free on-line scheme, the AdaTron algorithm, gives an asymptotic $(N/P)$ decay of the generalization error. This coincides, up to a constant factor, with the bound for any training process based on random examples, including off-line learning. Simulations confirm our results.
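The following is a minimal simulation sketch of the teacher-student setup described above, not code from the paper: a teacher vector B defines the linearly separable rule, a student J is trained on-line on uncorrelated Gaussian inputs, and the generalization error is measured as eps = arccos(R)/pi, where R is the normalized overlap between J and B. The exact AdaTron normalization used here is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

def generalization_error(J, B):
    # eps = arccos(R)/pi with R = J.B / (|J||B|), valid for random Gaussian inputs
    R = J @ B / (np.linalg.norm(J) * np.linalg.norm(B))
    return np.arccos(np.clip(R, -1.0, 1.0)) / np.pi

def train_online(N=200, alpha=20.0, rule="perceptron"):
    B = rng.standard_normal(N)           # teacher weights: the rule to be learned
    J = np.zeros(N)                       # student weights
    P = int(alpha * N)                    # number of random examples (alpha = P/N)
    for _ in range(P):
        x = rng.standard_normal(N)        # uncorrelated random input
        sigma = np.sign(B @ x)            # teacher label
        h = J @ x                         # student local field
        if rule == "hebb":
            J += sigma * x / np.sqrt(N)   # Hebbian: update on every example
        elif rule == "perceptron":
            if np.sign(h) != sigma:       # perceptron: update only on mistakes
                J += sigma * x / np.sqrt(N)
        elif rule == "adatron":
            if sigma * h <= 0:            # AdaTron-style relaxation step on mistakes
                J -= h * x / (x @ x)      # (normalization assumed, not from the paper)
    return generalization_error(J, B)

for rule in ("hebb", "perceptron", "adatron"):
    print(rule, train_online(rule=rule))
```

Averaging such runs over many teachers and comparing different values of alpha = P/N would reproduce the scaling behaviours quoted above.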