Calculation of learning curves for inconsistent algorithms.
The training and generalization errors of three well-known learning algorithms are calculated using methods of statistical physics. We focus in particular on inconsistent algorithms, which are unable to classify the training examples perfectly, and show that their asymptotic behavior differs from that of consistent algorithms. Our results agree with bounds derived by computational learning theorists. We further find that the replica-symmetric theory is stable everywhere for two of the algorithms studied, which leads us to conjecture that it is the exact solution in these cases. We also demonstrate that one of the algorithms studied performs almost indistinguishably from the Bayes learning algorithm, while having the advantage of being implementable in a single-layer network.
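For orientation, the asymptotic contrast alluded to above can be sketched as follows. This is the standard large-sample picture in the statistical mechanics of learning and in agnostic PAC bounds, not a formula quoted from the paper itself; the constants c_1, c_2 are hypothetical placeholders.

% Large-alpha behavior of the generalization error \epsilon_g, where
% \alpha = p/N is the number of training examples per weight,
% \epsilon_{\min} is the residual error of the best classifier in the class,
% and c_1, c_2 are algorithm-dependent constants (assumed here for illustration).
\[
  \epsilon_g(\alpha) - \epsilon_{\min} \;\sim\;
  \begin{cases}
    c_1/\alpha        & \text{consistent algorithms } (\epsilon_{\min} = 0),\\[4pt]
    c_2/\sqrt{\alpha} & \text{inconsistent algorithms } (\epsilon_{\min} > 0).
  \end{cases}
\]

On this reading, the slower 1/\sqrt{\alpha} decay of the excess error for inconsistent algorithms is the statistical-physics counterpart of the agnostic-learning sample-complexity bounds from computational learning theory, which is consistent with the agreement claimed in the abstract.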