Complexity of approximation of functions of few variables in high dimensions

In DeVore et al. (2011) [7] we considered smooth functions on [0,1]^N which depend on a much smaller number of variables ℓ, or continuous functions which can be approximated by such functions. We were interested in approximating those functions when we can calculate point values at points of our choice. The number of points we needed for non-adaptive algorithms was higher than in the adaptive case. In this paper we improve on DeVore et al. (2011) [7] and show that in the non-adaptive case one can use the same number of points (up to a multiplicative constant depending on ℓ) as in the adaptive case.
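To make the setting concrete, here is a minimal sketch of the query model in Python. It is an illustration only, not the algorithm of this paper or of DeVore et al. (2011) [7]: a function f on [0,1]^N that in fact depends on an unknown set of ℓ coordinates is evaluated at a non-adaptively chosen set of points, and coordinates whose perturbation changes the value are flagged as active. The one-coordinate-at-a-time probe and all names (f, active, detected) are hypothetical; this naive probe uses N+1 queries and can miss a variable whose influence happens to vanish at the chosen base point.

```python
# Illustrative sketch of the query model (not the algorithm of the paper):
# f is defined on [0,1]^N but depends only on an unknown set of ell coordinates,
# and we may evaluate f at points fixed in advance (non-adaptively).
import numpy as np

N, ell = 20, 2                                    # ambient dimension, number of active variables
rng = np.random.default_rng(0)
active = rng.choice(N, size=ell, replace=False)   # unknown to the sampling scheme

def f(x):
    """A smooth function on [0,1]^N depending only on the 'active' coordinates."""
    u = x[..., active]
    return np.sin(2 * np.pi * u[..., 0]) + u[..., 1] ** 2

# Non-adaptive query set, chosen before seeing any function values:
# a base point plus N points, each perturbing one coordinate.
base = np.full(N, 0.3)
queries = [base] + [base + 0.25 * np.eye(N)[i] for i in range(N)]
values = np.array([f(q) for q in queries])

# Coordinates whose perturbation changes the function value are flagged as active.
detected = [i for i in range(N) if abs(values[i + 1] - values[0]) > 1e-12]
print("true active coordinates:    ", sorted(active.tolist()))
print("detected active coordinates:", detected)
```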

[1] David L. Donoho, Compressed sensing, IEEE Transactions on Information Theory, 2006.

[2] Luisa Gargano et al., Sperner capacities, Graphs Comb., 1993.

[3] Henryk Woźniakowski et al., Finite-order weights imply tractability of linear multivariate problems, J. Approx. Theory, 2004.

[4] Jan Vybíral et al., Compressed learning of high-dimensional sparse functions, IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2011.

[5] R. Coifman et al., Diffusion Wavelets, 2004.

[6] C. de Boor et al., Quasiinterpolants and Approximation Power of Multivariate Splines, 1990.

[7] Christoph Schwab et al., Convergence rates for sparse chaos approximations of elliptic problems with stochastic coefficients, 2007.

[8] Quasi-interpolation on compact domains, 1995.

[9] R. DeVore et al., Compressed sensing and best k-term approximation, 2008.

[10] Erich Novak, On the Power of Adaption, J. Complex., 1996.

[11] Mikhail Belkin et al., Laplacian Eigenmaps for Dimensionality Reduction and Data Representation, Neural Computation, 2003.

[12] J. Komlós et al., On the Size of Separating Systems and Families of Perfect Hash Functions, 1984.

[13] Ryan O'Donnell et al., Learning juntas, STOC 2003.

[14] E. Greenshtein, Best subset selection, persistence in high-dimensional statistical learning and optimization under l1 constraint, 2006, math/0702684.

[15] Joel H. Spencer et al., Families of k-independent sets, Discret. Math., 1973.

[16] Aranyak Mehta et al., Learning symmetric k-juntas in time n^{o(k)}, 2005.

[17] I. Daubechies et al., Capturing Ridge Functions in High Dimensions from Point Queries, 2012.

[18] E. Novak et al., Tractability of Multivariate Problems, 2008.

[19] Shmuel Zaks et al., On sets of Boolean n-vectors with all k-projections surjective, Acta Informatica, 2004.

[20] János Körner et al., New Bounds for Perfect Hashing via Information Theory, Eur. J. Comb., 1988.

[21] Jan Vybíral et al., Learning Functions of Few Arbitrary Linear Parameters in High Dimensions, Found. Comput. Math., 2010.

[22] R. DeVore et al., Approximation of Functions of Few Variables in High Dimensions, 2011.

[23] E. Candès et al., Stable signal recovery from incomplete and inaccurate measurements, 2005, math/0503066.