The fractional correction rule: a new perspective

In this paper we discover and explore a useful property of the fractional correction rule, a variation of the perceptron learning rule for a single neural unit. We rename this rule the projection learning rule (PrLR), a name that better reflects the technique we use to prove convergence. We state and prove a more powerful convergence theorem and establish the link with S. Agmon's work (1954) on linear inequalities. The hallmark of this rule is that if the problem is not linearly separable, the rule always converges to the origin of the weight space. This indicates that appropriate nonlinear methods (e.g., a multilayer neural network or a nonlinear transformation of the input space) should be used to address such a problem. On the other hand, if the patterns are linearly separable, the performance of this rule is equivalent to that of the perceptron rule. A theoretical investigation of the rule leads to several further observations. We present experimental results on both linearly separable and linearly non-separable data using the PrLR and compare its performance with that of the perceptron rule.
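As a point of reference, the sketch below contrasts the standard fractional correction (relaxation) update with the fixed-increment perceptron update for a single unit. The function names, the margin parameter `b`, the relaxation factor `lam`, and the label handling are illustrative assumptions; they follow the textbook form of the rule, not necessarily the exact formulation used in the paper.

```python
import numpy as np

def fractional_correction_update(w, x, y, lam=1.0, b=1.0):
    """One fractional-correction (relaxation) step for a single unit.

    Assumed form: with the sign-adjusted pattern z = y*x, if w.z < b the
    weights move a fraction `lam` of the way toward the hyperplane w.z = b.
    With lam = 1 this is an exact orthogonal projection onto that hyperplane.
    The paper's exact margin and update condition may differ.
    """
    z = y * x                              # sign-adjusted pattern: we want w.z >= b
    a = np.dot(w, z)
    if a < b:                              # margin violated: apply the correction
        w = w + lam * (b - a) / np.dot(z, z) * z
    return w

def perceptron_update(w, x, y, eta=1.0):
    """Fixed-increment perceptron step, shown only for comparison."""
    if y * np.dot(w, x) <= 0:              # misclassified or on the boundary
        w = w + eta * y * x
    return w

# Minimal usage: cycle through augmented patterns X (rows) with labels Y in {-1, +1}.
def train(update, X, Y, epochs=50, w0=None):
    w = np.zeros(X.shape[1]) if w0 is None else np.asarray(w0, dtype=float)
    for _ in range(epochs):
        for x, y in zip(X, Y):
            w = update(w, x, y)
    return w
```

With lam = 1 each correction places w exactly on the violated hyperplane, which is the orthogonal-projection reading of Agmon's relaxation method that motivates the PrLR name.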