Prototype selection for finding efficient representations of dissimilarity data

The nearest neighbor (NN) rule is a simple and intuitive method for solving classification problems. In its original form, it uses distances to the complete training set. It performs well, but it is sensitive to noisy objects because it operates on local neighborhoods only. A more global approach is possible by mapping the distance data onto a pseudo-Euclidean space such that the distances are preserved as well as possible. A classifier built in such a space can then outperform the NN rule; however, all training objects are again needed to project new data. This paper addresses the issue of reducing the training set while preserving, as much as possible, the original structure of the mapped data. Several selection criteria are introduced and evaluated on two problems: polygon recognition and digit recognition. Our experiments show that the representation-mismatch criterion is beneficial for the applications considered. Moreover, a linear classifier built in the pseudo-Euclidean space determined by 20-25% of the training objects outperforms the NN rule based on all of them.
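As a rough illustration of the embedding step summarized above (not the authors' implementation), the following sketch maps a symmetric dissimilarity matrix into a pseudo-Euclidean space via the standard double-centering construction, keeping both positive and negative eigenvalues; the function name pe_embed and the keep parameter are illustrative assumptions only.

```python
# Hedged sketch: pseudo-Euclidean embedding of a dissimilarity matrix D
# so that squared pseudo-Euclidean distances reproduce D**2 (assumption:
# D is symmetric with a zero diagonal).
import numpy as np

def pe_embed(D, keep=None):
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                  # Gram-like matrix; may be indefinite
    evals, evecs = np.linalg.eigh(B)
    order = np.argsort(-np.abs(evals))           # rank axes by |eigenvalue|
    evals, evecs = evals[order], evecs[:, order]
    if keep is not None:                         # optional dimensionality reduction
        evals, evecs = evals[:keep], evecs[:, :keep]
    X = evecs * np.sqrt(np.abs(evals))           # coordinates in the embedded space
    signature = np.sign(evals)                   # +1: Euclidean axes, -1: anti-Euclidean axes
    return X, signature

# Toy usage with random symmetric dissimilarities.
rng = np.random.default_rng(0)
A = rng.random((6, 6))
D = np.abs(A - A.T)
np.fill_diagonal(D, 0.0)
X, sig = pe_embed(D)
# Pseudo-Euclidean squared distance between objects 0 and 1 matches D[0, 1]**2
# up to numerical error.
d2 = np.sum(sig * (X[0] - X[1]) ** 2)
print(d2, D[0, 1] ** 2)
```

Prototype selection, in this view, amounts to choosing a small subset of training objects to define the embedding, so that new objects can be projected from their dissimilarities to the prototypes alone; the criteria compared in the paper differ in how that subset is chosen.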