Computing Optimal Attribute Weight Settings for Nearest Neighbor Algorithms

Nearest neighbor (NN) learning algorithms, examples of the lazy learning paradigm, rely on a distance function to measure the similarity of a test example to the stored training examples. Since some attributes are more discriminative while others are less relevant or entirely irrelevant, attributes should be weighted differently in the distance function. Most previous studies on weight setting for NN learning algorithms are empirical. In this paper we describe our attempt to derive theoretically optimal weights that minimize the predictive error of NN algorithms. Assuming a uniform distribution of examples in a two-dimensional continuous space, we first derive the average predictive error introduced by a linear classification boundary, and then determine the optimal weight setting for any polygonal classification region. Our theoretical results on optimal attribute weights can serve as a baseline or lower bound for evaluating empirical weight-setting methods.
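
To make the role of attribute weights concrete, the following is a minimal sketch (not taken from the paper) of a 1-NN classifier using a weighted Euclidean distance; the weight vector `w` is a placeholder for whatever weighting scheme, empirical or theoretical, is chosen.

```python
import numpy as np

def weighted_distance(x, y, w):
    """Weighted Euclidean distance: sqrt(sum_i w_i * (x_i - y_i)^2)."""
    return np.sqrt(np.sum(w * (x - y) ** 2))

def nn_predict(query, X_train, y_train, w):
    """Classify `query` with the label of its nearest training example
    under the weighted distance."""
    dists = [weighted_distance(query, x, w) for x in X_train]
    return y_train[int(np.argmin(dists))]

# Illustrative data (hypothetical): the second attribute is irrelevant,
# so it is assigned weight 0 and does not influence the prediction.
X_train = np.array([[0.1, 0.9], [0.9, 0.2], [0.2, 0.1], [0.8, 0.8]])
y_train = np.array([0, 1, 0, 1])
w = np.array([1.0, 0.0])  # per-attribute weights in the distance function
print(nn_predict(np.array([0.12, 0.95]), X_train, y_train, w))  # -> 0
```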
