In many real-life pattern recognition problems, it may be prudent to reject an example rather than risk a costly misclassification. Typically, the rejection threshold is determined after the underlying classifier has been trained to minimize misclassification. In this paper, we present two algorithms to train a hyperplane with a bandwidth for rejection, wherein the classifier and the bandwidth are determined simultaneously. Experimental results indicate that the hypothesis thus determined improves over its liberal counterpart (the same perceptron with zero bandwidth), and over a perceptron trained using the standard perceptron learning rule with a rejection threshold determined afterwards by Chow's rule.
Key-Words: Perceptron learning, rejection threshold, embedded reject option
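The reject-option idea above can be illustrated with a minimal sketch. This is not the paper's algorithm: as a simplifying assumption, the bandwidth here is a fixed margin enforced during training rather than a parameter learned jointly with the hyperplane. At prediction time, any example whose score falls inside the band is rejected instead of classified.

```python
import numpy as np

def train_margin_perceptron(X, y, bandwidth=0.5, lr=0.1, epochs=100):
    """Perceptron-style training that treats any point with
    y * (w.x + b) <= bandwidth as a mistake, so correctly classified
    points end up outside the rejection band.
    (Hypothetical simplification: bandwidth is fixed, not learned.)"""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) <= bandwidth:
                w += lr * yi * xi
                b += lr * yi
    return w, b

def predict_with_reject(x, w, b, bandwidth):
    """Return +1/-1 outside the band; 0 signals rejection."""
    score = x @ w + b
    if abs(score) < bandwidth:
        return 0  # reject: too close to the hyperplane
    return 1 if score > 0 else -1

# Toy 1-D example: two well-separated classes.
X = np.array([[2.0], [3.0], [-2.0], [-3.0]])
y = np.array([1, 1, -1, -1])
w, b = train_margin_perceptron(X, y)
```

With this toy data the training points are classified with full margin, while a point near the hyperplane (e.g. `x = 0.5`) is rejected. Chow's rule would instead pick the band width after training, by trading error rate against reject rate.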
[1] W. Hoeffding, Probability Inequalities for Sums of Bounded Random Variables, 1963.
[2] C. K. Chow, On optimum recognition error and reject tradeoff, IEEE Trans. Inf. Theory, 1970.
[3] Fabio Roli et al., Support Vector Machines with Embedded Reject Option, SVM, 2002.
[4] Fabio Roli et al., Multiple Reject Thresholds for Improving Classification Reliability, SSPR/SPR, 2000.
[5] Léon Bottou et al., Local Learning Algorithms, Neural Computation, 1992.
[6] John Shawe-Taylor et al., The Perceptron Algorithm with Uneven Margins, ICML, 2002.