[1] Ding-Xuan Zhou, et al. Learning Theory: An Approximation Theory Viewpoint, 2007.
[2] Mario Ullrich, et al. Function values are enough for L2-approximation: Part II, 2020, J. Complex.
[3] A. Buchholz. Optimal Constants in Khintchine Type Inequalities for Fermions, Rademachers and q-Gaussian Operators, 2005.
[4] M. Rudelson. Random Vectors in the Isotropic Position, 1996, math/9608208.
[5] V. N. Temlyakov, et al. The Marcinkiewicz-Type Discretization Theorems, 2017, Constructive Approximation.
[6] S. Mahadevan, et al. Learning Theory, 2001.
[7] A. Cohen, et al. Optimal weighted least-squares methods, 2016, 1608.00512.
[8] M. Talagrand, et al. Probability in Banach spaces, 1991.
[9] Karlheinz Gröchenig, et al. Sampling, Marcinkiewicz-Zygmund Inequalities, Approximation, and Quadrature Rules, 2019, J. Approx. Theory.
[10] Joel A. Tropp, et al. User-Friendly Tail Bounds for Sums of Random Matrices, 2010, Found. Comput. Math.
[11] S. Mendelson, et al. On singular values of matrices with independent rows, 2006.
[12] Henryk Wozniakowski, et al. On the Power of Standard Information for Weighted Approximation, 2001, Found. Comput. Math.
[13] Vladimir N. Temlyakov, et al. On optimal recovery in L2, 2021, J. Complex.
[14] E. Novak, et al. Tractability of Multivariate Problems, Volume III: Standard Information for Operators, 2012.
[15] Andreas Christmann, et al. Support vector machines, 2008, Data Mining and Knowledge Discovery Handbook.
[16] S. Dirksen, et al. Noncommutative and vector-valued Rosenthal inequalities, 2011.
[17] A. Berlinet, et al. Reproducing kernel Hilbert spaces in probability and statistics, 2004.
[18] Vladimir N. Temlyakov, et al. Sampling discretization error of integral norms for function classes, 2019, J. Complex.
[19] R. Oliveira. Sums of random Hermitian matrices and an inequality by Rudelson, 2010, 1004.3821.
[20] O. Bousquet, et al. Kernels, Associated Structures and Generalizations, 2004.
[21] Toni Volkmer, et al. Worst case recovery guarantees for least squares approximation using random samples, 2019, arXiv.
[22] Mario Ullrich. On the worst-case error of least squares algorithms for L2-approximation with high probability, 2020, J. Complex.
[23] Grzegorz W. Wasilkowski. Some nonlinear problems are as easy as the approximation problem, 1984.
[24] Ingo Steinwart, et al. Mercer's Theorem on General Domains: On the Interaction between Measures, Kernels, and RKHSs, 2012.
[25] A. Buchholz. Operator Khintchine inequality in non-commutative probability, 2001.
[26] Vladimir Temlyakov. The Entropy in Learning Theory. Error Estimates, 2007.
[27] M. Ruiz Espejo. Sampling, 2013, Encyclopedic Dictionary of Archaeology.
[28] Felipe Cucker, et al. Learning Theory: An Approximation Theory Viewpoint: Index, 2007.
[29] Holger Rauhut, et al. Compressive Sensing with structured random matrices, 2012.
[30] H. Rauhut. Compressive Sensing and Structured Random Matrices, 2009.
[31] Michael A. Saunders, et al. LSQR: An Algorithm for Sparse Linear Equations and Sparse Least Squares, 1982, TOMS.