Estimation by the nearest neighbor rule

Let R^{\ast} denote the Bayes risk (minimum expected loss) for the problem of estimating \theta \in \Theta, given an observed random variable x, a joint probability distribution F(x, \theta), and a loss function L. Consider the problem in which the only knowledge of F is that which can be inferred from samples (x_1, \theta_1), (x_2, \theta_2), \ldots, (x_n, \theta_n), where the (x_i, \theta_i)'s are independently and identically distributed according to F. Let the nearest neighbor estimate of the parameter \theta associated with an observation x be defined to be the parameter \theta_n' associated with the nearest neighbor x_n' of x. Let R be the large-sample risk of the nearest neighbor rule. It will be shown, for a wide range of probability distributions, that R \leq 2R^{\ast} for metric loss functions and R = 2R^{\ast} for the squared-error loss function. A simple estimator using the nearest k neighbors yields R = R^{\ast}(1 + 1/k) in the squared-error loss case. In this sense, it can be said that at least half the information in the infinite training set is contained in the nearest neighbor. This paper extends earlier work [2] from the problem of classification by the nearest neighbor rule to that of estimation. However, the unbounded loss functions in the estimation problem introduce additional difficulties concerning the convergence of the unconditional risk. Thus some work is devoted to investigating natural conditions on the underlying distribution that assure the desired convergence.
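The following Monte Carlo sketch (not part of the paper) illustrates the two squared-error claims empirically: the single-nearest-neighbor risk approaches 2R^{\ast}, and averaging the nearest k neighbors approaches R^{\ast}(1 + 1/k). The specific model (x standard normal, \theta = \sin(x) plus Gaussian noise), the sample sizes, and the helper names are assumptions chosen only for illustration.

```python
# Illustrative sketch, not the paper's construction. Under the assumed model,
# Var(theta | x) = sigma^2 for every x, so the Bayes risk R* equals sigma^2.
import numpy as np

rng = np.random.default_rng(0)
sigma = 0.5                       # conditional std of theta given x (assumption)
n_train, n_test, k = 20000, 5000, 5

def sample(n):
    x = rng.normal(size=n)                            # observation
    theta = np.sin(x) + sigma * rng.normal(size=n)    # parameter, E[theta | x] = sin(x)
    return x, theta

x_tr, th_tr = sample(n_train)
x_te, th_te = sample(n_test)

order = np.argsort(x_tr)                              # 1-D data: sort once for fast lookup
x_sorted, th_sorted = x_tr[order], th_tr[order]

def knn_estimate(x, k):
    """Average the parameters attached to the k training points nearest to x."""
    i = np.searchsorted(x_sorted, x)
    lo, hi = max(i - k, 0), min(i + k, len(x_sorted))
    nearest = np.argsort(np.abs(x_sorted[lo:hi] - x))[:k]
    return th_sorted[lo:hi][nearest].mean()

r_nn  = np.mean([(knn_estimate(x, 1) - t) ** 2 for x, t in zip(x_te, th_te)])
r_knn = np.mean([(knn_estimate(x, k) - t) ** 2 for x, t in zip(x_te, th_te)])
r_star = sigma ** 2                                   # Bayes risk: E[Var(theta | x)]

print(f"R*        = {r_star:.3f}")
print(f"1-NN risk = {r_nn:.3f}  (theory: 2 R* = {2 * r_star:.3f})")
print(f"{k}-NN risk = {r_knn:.3f}  (theory: R*(1 + 1/{k}) = {r_star * (1 + 1/k):.3f})")
```

With a large training set the printed 1-NN risk should sit near 2R^{\ast} and the k-NN risk near R^{\ast}(1 + 1/k); the small gaps that remain are finite-sample bias from the curvature of the assumed regression function.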

[1] J. Gordon et al., "Quantum Effects in Communications Systems," Proceedings of the IRE, 1962.

[2] T. M. Cover and P. E. Hart, "Nearest neighbor pattern classification," IEEE Trans. Inf. Theory, 1967.