The recently proposed Bayesian approach to online learning is applied to learning a rule defined as a noisy single-layer perceptron with either continuous or binary weights. In the Bayesian online approach the exact posterior distribution is approximated by a simpler parametric posterior that is updated online as new examples are incorporated into the dataset. In the case of continuous weights, the approximate posterior is chosen to be Gaussian. The computational complexity of the resulting online algorithm is found to be at least as high as that of the Bayesian offline approach, making the online approach less attractive. A Hebbian approximation based on casting the full covariance matrix into an isotropic diagonal form significantly reduces the computational complexity and yields a previously identified optimal Hebbian algorithm. In the case of binary weights, the approximate posterior is chosen to be a biased binary distribution. The resulting online algorithm is derived and shown to outperform several other online approaches to this problem.
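
To make the continuous-weight case concrete, the following Python snippet is a minimal sketch of the generic assumed-density-filtering update that underlies this kind of Gaussian online approximation: the posterior is kept as N(mu, C) and, after each example, the true (non-Gaussian) posterior is projected back onto a Gaussian by matching its first two moments. The function name bayes_online_step, the output-flip noise model with rate kappa, and all parameter values are illustrative assumptions, not details taken from the paper.

import numpy as np
from scipy.stats import norm

def bayes_online_step(mu, C, x, y, kappa=0.1):
    # Sketch only: one generic assumed-density-filtering update for a
    # noisy perceptron teacher whose label is flipped with prob. kappa,
    # so P(y | w, x) = kappa + (1 - 2*kappa) * Theta(y * w.x).
    h = float(mu @ x)                 # posterior mean of the field w.x
    s = np.sqrt(float(x @ C @ x) + 1e-12)  # posterior std of the field
    z = y * h / s
    # Predictive probability of the observed label y under the noise model
    Z = kappa + (1.0 - 2.0 * kappa) * norm.cdf(z)
    # First and second derivatives of ln Z with respect to the field h
    g = (1.0 - 2.0 * kappa) * norm.pdf(z) * y / (s * Z)
    d2 = -g * (g + y * z / s)
    Cx = C @ x
    mu_new = mu + g * Cx              # moment-matched posterior mean
    C_new = C + d2 * np.outer(Cx, Cx)  # O(N^2) covariance update
    return mu_new, C_new

# Illustrative usage: learn a random (noise-free) teacher direction.
rng = np.random.default_rng(0)
N = 50
teacher = np.sign(rng.standard_normal(N))
mu, C = np.zeros(N), np.eye(N)
for _ in range(500):
    x = rng.standard_normal(N) / np.sqrt(N)
    y = 1.0 if teacher @ x >= 0 else -1.0
    mu, C = bayes_online_step(mu, C, x, y)
print(mu @ teacher / (np.linalg.norm(mu) * np.linalg.norm(teacher)))

The O(N^2) covariance update per example is the source of the high computational cost noted above. Replacing the full covariance C by an isotropic diagonal c * I collapses the covariance update to a single scalar and turns the mean update into a Hebbian-like step mu += g * c * x, which is the kind of complexity reduction the Hebbian approximation achieves.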