Learning with a Temperature-Dependent Algorithm

We analyse the properties of a new learning algorithm for binary perceptrons based on the minimization of a temperature-dependent differentiable cost function. We show that learning at finite temperature increases the stabilities of the learned patterns, endowing the perceptron with robustness, at the price of accepting a small fraction of errors on the learning set. If the temperature is chosen appropriately, the algorithm approaches the optimal generalization performance for linearly separable functions. Therefore, by controlling the learning temperature, this algorithm solves the main practical problem of perceptron learning, namely finding the best weights, independently of the nature of the learning set.
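To make the idea concrete, the following is a minimal sketch of finite-temperature perceptron learning. The specific cost function used here, a sigmoidal smoothing of the error count in which the temperature T sets the smoothing width of each pattern's stability, is an assumption chosen for illustration; the paper's exact cost function, learning rates, and data set are not specified here, and the teacher-generated patterns are hypothetical.

```python
import numpy as np

# Sketch of learning by gradient descent on a temperature-dependent,
# differentiable cost function (assumed form, not the paper's exact one).

rng = np.random.default_rng(0)

# Hypothetical linearly separable learning set generated by a teacher perceptron.
N, P = 50, 200                                 # input dimension, number of patterns
teacher = rng.standard_normal(N)
xi = rng.choice([-1.0, 1.0], size=(P, N))      # binary input patterns
sigma = np.sign(xi @ teacher)                  # target outputs

def cost_and_grad(w, T):
    """Smoothed error count: sum over patterns of 1/(1 + exp(lambda/T)),
    where lambda is the normalized stability of each pattern."""
    norm = np.linalg.norm(w) + 1e-12
    lam = sigma * (xi @ w) / norm              # per-pattern stabilities
    s = 1.0 / (1.0 + np.exp(lam / T))          # per-pattern cost in (0, 1)
    # Gradient of the total cost w.r.t. w (chain rule through lam and the norm).
    dlam_dw = (sigma[:, None] * xi) / norm - np.outer(lam, w) / norm**2
    grad = (-(s * (1.0 - s) / T)[:, None] * dlam_dw).sum(axis=0)
    return s.sum(), grad

def train(T=0.5, eta=0.05, steps=2000):
    """Plain gradient descent at fixed temperature T (illustrative values)."""
    w = rng.standard_normal(N)
    for _ in range(steps):
        _, g = cost_and_grad(w, T)
        w -= eta * g
    return w

w = train()
errors = np.mean(np.sign(xi @ w) != sigma)
stabilities = sigma * (xi @ w) / np.linalg.norm(w)
print(f"training error rate: {errors:.3f}, min stability: {stabilities.min():.3f}")
```

At low T the cost approaches the raw count of misclassified patterns, while larger T also rewards increasing the stabilities of correctly learned patterns, which is the trade-off (robustness versus a small fraction of training errors) described above.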