Best Choices for Regularization Parameters in Learning Theory: On the Bias-Variance Problem

