On-Line Learning of a Time-Dependent Rule
We study the learning of a time-dependent linearly separable rule in a neural network. The rule is represented by an N-vector performing a random walk. A single-layer perceptron is trained on-line using a Hebb-like algorithm with an additional weight decay. The evolution of the generalization error is calculated exactly in the thermodynamic limit N → ∞. We consider both randomly drawn training examples and examples selected by a query strategy. The rule is never learnt perfectly, but it can be tracked within a certain error level. Simulations confirm the analytic results.
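For intuition, the scenario described above can be simulated directly. The following is a minimal sketch, not the paper's exact setup: the values of N, the learning rate eta, the weight-decay constant lam, and the drift amplitude are illustrative assumptions. A student perceptron tracks a teacher vector that performs a random walk, using a Hebb-like on-line update with weight decay; the generalization error is the usual arccos of the normalized teacher–student overlap for Gaussian inputs.

```python
import numpy as np

# Sketch: on-line Hebbian learning with weight decay of a drifting
# (time-dependent) linearly separable rule.  All parameter values below
# are illustrative assumptions, not taken from the paper.

rng = np.random.default_rng(0)

N = 500          # input dimension
eta = 0.05       # learning rate of the Hebb-like update
lam = 0.01       # weight-decay constant
drift = 0.5      # scale of the teacher's random walk per example
steps = 20000

# Teacher vector B lives on the sphere of radius sqrt(N); student J starts at zero.
B = rng.standard_normal(N)
B *= np.sqrt(N) / np.linalg.norm(B)
J = np.zeros(N)

gen_errors = []
for t in range(steps):
    # Draw a random example and its label from the current teacher.
    x = rng.standard_normal(N)
    sigma = np.sign(B @ x)

    # Hebb-like on-line update with weight decay.
    J = (1.0 - lam) * J + (eta / np.sqrt(N)) * sigma * x

    # Teacher drifts: random-walk step followed by renormalization.
    B += (drift / np.sqrt(N)) * rng.standard_normal(N)
    B *= np.sqrt(N) / np.linalg.norm(B)

    # Generalization error = arccos(overlap) / pi for Gaussian inputs.
    overlap = (J @ B) / (np.linalg.norm(J) * np.linalg.norm(B) + 1e-12)
    gen_errors.append(np.arccos(np.clip(overlap, -1.0, 1.0)) / np.pi)

print(f"asymptotic generalization error ≈ {np.mean(gen_errors[-1000:]):.3f}")
```

As in the analysis, the student never reaches zero error: the residual error level at which the drifting rule is tracked depends on the balance between the learning rate, the weight decay, and the drift amplitude.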