On-Line Learning of a Time-Dependent Rule

We study the learning of a time-dependent linearly separable rule in a neural network. The rule is represented by an N-vector performing a random walk. A single-layer perceptron is trained on-line using a Hebb-like algorithm with an additional weight decay. The evolution of the generalization error is calculated exactly in the thermodynamic limit N → ∞. We consider both randomly drawn training examples and examples selected by a query strategy. The rule is never learnt perfectly, but it can be tracked within a certain error level. Simulations confirm the analytic results.
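For illustration, a minimal simulation sketch of the tracking scenario summarized above: a student perceptron is trained by a Hebb-like rule with weight decay on examples labelled by a teacher vector that performs a random walk. The parameter names and scalings (eta, gamma, drift) are illustrative assumptions, not the paper's exact prescription; the generalization error for isotropic inputs is measured as arccos of the student-teacher overlap divided by π.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1000                 # input dimension (finite-N stand-in for the N -> infinity analysis)
eta, gamma = 1.0, 0.5    # learning rate and weight-decay strength (assumed values)
drift = 0.01             # random-walk step size of the teacher per example (assumed value)

B = rng.standard_normal(N)
B /= np.linalg.norm(B)   # teacher (rule) vector, kept at unit length
J = np.zeros(N)          # student weights

def gen_error(J, B):
    """Generalization error for random inputs: arccos(normalized overlap) / pi."""
    norm = np.linalg.norm(J)
    if norm == 0.0:
        return 0.5
    return np.arccos(np.clip(J @ B / norm, -1.0, 1.0)) / np.pi

for step in range(20 * N):
    xi = rng.standard_normal(N)          # randomly drawn training example
    sigma = np.sign(B @ xi)              # label assigned by the current rule
    # Hebb-like on-line update with weight decay, scaled with 1/N
    J = (1.0 - gamma / N) * J + (eta / N) * sigma * xi
    # teacher performs a random walk and is renormalized to unit length
    B += (drift / np.sqrt(N)) * rng.standard_normal(N)
    B /= np.linalg.norm(B)

print(f"tracking error after {20 * N} examples: {gen_error(J, B):.3f}")
```

As in the analysis, the error in such a sketch does not decay to zero; it settles at a residual level set by the competition between the drift of the rule and the learning and decay rates.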