LEARNING THE UNLEARNABLE

Three neural network training algorithms are presented which are robust to unlearnable problems. The first algorithm converges to the Gardner stability limit if the learning problem is linearly separable, and otherwise finds a locally maximally stable solution. The second is a robust version of Rosenblatt's perceptron learning algorithm: it converges to a solution of the learning problem if one exists, and otherwise converges locally to a solution with a certain fraction of wrongly mapped patterns. The third algorithm is best suited to unlearnable problems: it always finds a solution if the problem is learnable, and otherwise locally maximizes the number of patterns which are stored correctly. The error rates of this algorithm and of other known algorithms for unlearnable problems are compared on two benchmark problems. Proofs of the existence of solutions are given, and convergence is proven to be global for learnable problems and local for unlearnable ones.
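For context, the classical Rosenblatt perceptron rule on which the second algorithm builds can be sketched as follows. This is a minimal illustrative implementation of the standard rule only, not of the robust variants introduced here; the toy data set is a hypothetical linearly separable example (an AND-like mapping with a bias input).

```python
import numpy as np

def perceptron_train(patterns, labels, max_epochs=100):
    """Classical Rosenblatt perceptron rule: for each wrongly mapped
    pattern x_i (i.e. sign(w . x_i) != y_i), update w += y_i * x_i.
    Converges to a separating w iff the problem is linearly separable;
    on an unlearnable problem it cycles without converging."""
    w = np.zeros(patterns.shape[1])
    for _ in range(max_epochs):
        errors = 0
        for xi, yi in zip(patterns, labels):
            if yi * np.dot(w, xi) <= 0:   # wrongly mapped (or on the boundary)
                w += yi * xi
                errors += 1
        if errors == 0:                   # every pattern is stored correctly
            return w
    return w  # may not separate the patterns if the problem is unlearnable

# Hypothetical toy set: AND-like mapping, first component is a bias input.
X = np.array([[1, 0, 0], [1, 0, 1], [1, 1, 0], [1, 1, 1]], dtype=float)
y = np.array([-1, -1, -1, 1], dtype=float)
w = perceptron_train(X, y)
print(all(y * (X @ w) > 0))  # → True: all patterns correctly stored
```

The stopping criterion `yi * np.dot(w, xi) <= 0` treats patterns exactly on the decision boundary as errors, so at convergence every pattern has a strictly positive stability margin.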