Prototype optimization for nearest neighbor classifiers using a two-layer perceptron

Abstract: The performance of a nearest neighbor classifier degrades when only a small number of training samples are used as prototypes. An algorithm is presented for modifying the prototypes so that the classification rate can be increased. The algorithm uses a two-layer perceptron with one second-order input. Each hidden node of the perceptron represents a prototype, and the weights of the connections between a hidden node and the input nodes are initialized to the feature values of the corresponding prototype. The weights are then adjusted by a gradient-based algorithm to generate a new prototype. The algorithm has been tested with good results.
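The core idea described in the abstract — treating each prototype's feature vector as the trainable weights of a hidden node and refining it by gradient descent — can be illustrated with a minimal sketch. This is not the paper's exact network (the second-order input and perceptron output layer are omitted); it is a hypothetical stand-in that minimizes a soft nearest-neighbor loss, so that each training sample pulls same-class prototypes toward it and pushes different-class prototypes away. The function name, loss, and all parameters (`lr`, `beta`, `epochs`) are assumptions for illustration only.

```python
import math

def refine_prototypes(protos, proto_labels, X, y, lr=0.1, beta=5.0, epochs=50):
    """Gradient-based prototype refinement (illustrative sketch).

    Each prototype starts as a training sample, analogous to
    initializing hidden-node weights to prototype feature values.
    Responsibilities are a softmax over negative squared distances,
    so the nearest prototype receives the largest update.
    """
    protos = [list(p) for p in protos]  # copy: prototypes are the trainable weights
    for _ in range(epochs):
        for x, label in zip(X, y):
            # squared Euclidean distance from sample x to each prototype
            d = [sum((xi - pi) ** 2 for xi, pi in zip(x, p)) for p in protos]
            # softmax responsibilities over negative distances (shifted for stability)
            m = min(d)
            w = [math.exp(-beta * (dk - m)) for dk in d]
            s = sum(w)
            w = [wk / s for wk in w]
            for k, p in enumerate(protos):
                # same-class prototypes are attracted, others repelled
                sign = 1.0 if proto_labels[k] == label else -1.0
                for i in range(len(p)):
                    p[i] += lr * sign * w[k] * (x[i] - p[i])
    return protos
```

For example, two badly placed 1-D prototypes (one per class, both near the decision region) move apart toward their class means after refinement, so the resulting nearest neighbor classifier separates the training samples correctly.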
