Fast, smooth and adaptive regression in metric spaces

It was recently shown that certain nonparametric regressors can escape the curse of dimensionality when the intrinsic dimension of data is low ([1, 2]). We prove some stronger results in more general settings. In particular, we consider a regressor which, by combining aspects of both tree-based regression and kernel regression, adapts to intrinsic dimension, operates on general metrics, yields a smooth function, and evaluates in time O(log n). We derive a tight convergence rate of the form n^{-2/(2+d)}, where d is the Assouad dimension of the input space.
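As background for the kernel-regression side of the estimator described above, a plain Nadaraya-Watson kernel smoother over an arbitrary metric can be sketched as follows. This is an illustrative O(n)-per-query smoother, not the paper's tree-accelerated O(log n) algorithm; the function names and the Epanechnikov-style kernel are choices made here for the example.

```python
def nw_kernel_regress(query, X, y, metric, h):
    """Nadaraya-Watson kernel regression estimate at `query`.

    X: list of sample points, y: their responses,
    metric: distance function on the input space, h: bandwidth.
    Works for any metric, e.g. Euclidean or edit distance.
    """
    # Epanechnikov-style weights: points farther than h get weight 0.
    weights = [max(0.0, 1.0 - (metric(query, x) / h) ** 2) for x in X]
    total = sum(weights)
    if total == 0.0:
        # No sample within bandwidth h: fall back to the global mean.
        return sum(y) / len(y)
    return sum(w * yi for w, yi in zip(weights, y)) / total

# Usage on 1-D data with the absolute-value metric.
X = [0.0, 1.0, 2.0, 3.0]
y = [0.0, 1.0, 2.0, 3.0]
dist = lambda a, b: abs(a - b)
print(nw_kernel_regress(1.5, X, y, dist, h=1.0))  # -> 1.5
```

A tree-based variant in the spirit of the paper would replace the linear scan over X with a hierarchical partition (e.g. a cover tree [10]) so that only points near the query contribute, which is what enables the O(log n) evaluation time.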

[1] Alexander G. Gray et al. Faster Gaussian Summation: Theory and Experiment, 2006, UAI.

[2] P. Bickel et al. Local polynomial regression on unknown manifolds, 2007, arXiv:0708.0983.

[3] Stefan Schaal et al. Memory-based robot learning, 1994, Proceedings of the 1994 IEEE International Conference on Robotics and Automation.

[4] Mikhail Belkin et al. Laplacian Eigenmaps for Dimensionality Reduction and Data Representation, 2003, Neural Computation.

[5] K. Clarkson. Nearest-Neighbor Searching and Metric Space Dimensions, 2005.

[6] Sanjeev R. Kulkarni et al. Rates of convergence of nearest neighbor estimation under arbitrary sampling, 1995, IEEE Trans. Inf. Theory.

[7] Alexander G. Gray et al. Fast High-dimensional Kernel Summations Using the Monte Carlo Multipole Method, 2008, NIPS.

[8] Sanjoy Dasgupta et al. Random projection trees and low dimensional manifolds, 2008, STOC.

[9] J. Tenenbaum et al. A global geometric framework for nonlinear dimensionality reduction, 2000, Science.

[10] John Langford et al. Cover trees for nearest neighbor, 2006, ICML.

[11] S. T. Roweis et al. Nonlinear dimensionality reduction by locally linear embedding, 2000, Science.

[12] C. J. Stone. Optimal Rates of Convergence for Nonparametric Estimators, 1980.

[13] Robert Krauthgamer et al. Navigating nets: simple algorithms for proximity search, 2004, SODA '04.

[14] Samory Kpotufe et al. Escaping the Curse of Dimensionality with a Tree-based Regressor, 2009, COLT.

[15] Adam Krzyzak et al. A Distribution-Free Theory of Nonparametric Regression, 2002, Springer Series in Statistics.

[16] Andrew W. Moore et al. Locally Weighted Learning, 1997, Artificial Intelligence Review.

[17] C. J. Stone. Optimal Global Rates of Convergence for Nonparametric Estimators, 1982.