An Approach to Reduce the Computational Burden of Nearest Neighbor Classifier

Abstract Nearest neighbor classifiers demand substantial computational resources, i.e., time and memory. Reducing the reference set (training set) and feature selection are two different approaches to this problem. This paper presents a method that reduces the training set in both cardinality and dimensionality, applied in cascade. Experiments on several benchmark datasets yield satisfactory results.
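The abstract does not spell out which reduction schemes are cascaded, so the sketch below is illustrative rather than the paper's method: it pairs Hart's Condensed Nearest Neighbor (CNN) rule for cardinality reduction with scikit-learn's greedy forward feature selection for dimensionality reduction, then scores a 1-NN classifier on the doubly reduced reference set. The condense helper and the choice of CNN and SequentialFeatureSelector are assumptions for illustration.

import numpy as np
from sklearn.datasets import load_iris
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier


def condense(X, y, rng=None):
    # Hart's CNN rule (an assumed choice): grow a subset until it
    # classifies every training point correctly with 1-NN.
    rng = np.random.default_rng(rng)
    order = rng.permutation(len(X))
    keep = [order[0]]                  # seed the condensed set with one point
    changed = True
    while changed:                     # repeat passes until nothing is absorbed
        changed = False
        for i in order:
            nn = KNeighborsClassifier(n_neighbors=1).fit(X[keep], y[keep])
            if nn.predict(X[i:i + 1])[0] != y[i]:
                keep.append(i)         # absorb each misclassified point
                changed = True
    return np.array(keep)


X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Stage 1 of the cascade: reduce cardinality of the training set.
idx = condense(X_tr, y_tr, rng=0)
X_red, y_red = X_tr[idx], y_tr[idx]

# Stage 2 of the cascade: reduce dimensionality of the condensed set
# (greedy forward selection wrapped around a 1-NN classifier).
sfs = SequentialFeatureSelector(KNeighborsClassifier(n_neighbors=1),
                                n_features_to_select=2).fit(X_red, y_red)

# Evaluate 1-NN on the reference set reduced in both directions.
clf = KNeighborsClassifier(n_neighbors=1).fit(sfs.transform(X_red), y_red)
print(f"prototypes kept: {len(idx)}/{len(X_tr)}")
print(f"test accuracy: {clf.score(sfs.transform(X_te), y_te):.3f}")

Running the cascade in this order means the feature-selection stage only ever evaluates the condensed prototypes, which is what makes the combined scheme cheaper than selecting features over the full training set; the reverse ordering is equally plausible and the abstract does not say which the authors use.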
