Self-Adjusting Reject Options in Prototype Based Classification

Reject options in classification play a major role whenever the costs of a misclassification are higher than the costs of postponing the decision, prime examples being safety-critical systems, medical diagnosis, or models which rely on user interaction and user acceptance. While optimum reject options can be computed analytically in the case of a probabilistic generative classification model, it is not clear how to optimally integrate reject strategies into efficient deterministic counterparts such as the popular learning vector quantization (LVQ). Recently, first techniques have been proposed which offer promising a posteriori strategies for an efficient reject in such cases (Fischer et al., Neurocomputing, 2015 [7]). In this contribution, we take a different point of view and formalize the optimum reject via an integrated cost function. We show that an efficient approximation of these costs, together with a geometric reject rule, leads to an extension of LVQ which not only aligns the classification model with the reject costs but also self-adjusts an optimum reject threshold during training.
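To make the notion of a geometric reject rule concrete, the following is a minimal sketch (not the paper's exact formulation) of a distance-based reject option for a prototype classifier: a sample is rejected whenever the relative difference between the distance to the closest prototype and the distance to the closest prototype of any other class falls below a threshold theta. The function name and the particular certainty measure are illustrative assumptions.

```python
import numpy as np

def classify_with_reject(x, prototypes, labels, theta):
    """Nearest-prototype classification with a geometric reject option.

    Certainty is measured by the relative distance difference
        r(x) = (d_minus - d_plus) / (d_plus + d_minus),
    where d_plus is the distance to the closest prototype and d_minus
    the distance to the closest prototype of a different class.
    The sample is rejected (None is returned) if r(x) < theta.
    """
    d = np.linalg.norm(prototypes - x, axis=1)   # distances to all prototypes
    winner = np.argmin(d)
    d_plus = d[winner]
    d_minus = d[labels != labels[winner]].min()  # closest prototype of another class
    certainty = (d_minus - d_plus) / (d_plus + d_minus)
    return None if certainty < theta else labels[winner]
```

In the approach summarized above, such a threshold is not fixed by hand but adapted together with the prototypes during training, driven by an integrated cost function that weighs misclassification costs against reject costs.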