Learning Bounds for Kernel Regression Using Effective Data Dimensionality
[1] S. van de Geer. Empirical Processes in M-Estimation, 2000.
[2] Felipe Cucker, et al. On the mathematical foundations of learning, 2001.
[3] Tong Zhang, et al. Covering Number Bounds of Certain Regularized Linear Function Classes, 2002, J. Mach. Learn. Res.
[4] Shahar Mendelson, et al. On the Performance of Kernel Classes, 2003, J. Mach. Learn. Res.
[5] Tong Zhang, et al. Leave-One-Out Bounds for Kernel Methods, 2003, Neural Computation.
[6] C. J. Stone, et al. Optimal Global Rates of Convergence for Nonparametric Regression, 1982.
[7] Bernhard Schölkopf, et al. Generalization Performance of Regularization Networks and Support Vector Machines via Entropy Numbers of Compact Operators, 1998.
[8] Grace Wahba, et al. Spline Models for Observational Data, 1990.
[9] V. Yurinsky. Sums and Gaussian Vectors, 1995.
[10] Shahar Mendelson, et al. Improving the sample complexity using global data, 2002, IEEE Trans. Inf. Theory.
[11] John Shawe-Taylor, et al. Covering numbers for support vector machines, 1999, COLT '99.
[12] Peter L. Bartlett, et al. The importance of convexity in learning with squared loss, 1998, COLT '96.
[13] Steven A. Orszag, et al. CBMS-NSF Regional Conference Series in Applied Mathematics, 1978.
[14] Peter L. Bartlett, et al. Localized Rademacher Complexities, 2002, COLT.
[15] Tong Zhang, et al. Effective Dimension and Generalization of Kernel Learning, 2002, NIPS.
[16] S. R. Jammalamadaka, et al. Empirical Processes in M-Estimation, 2001.