A Comparison between Fixed-Basis and Variable-Basis Schemes for Function Approximation and Functional Optimization

Fixed-basis and variable-basis approximation schemes are compared for the problems of function approximation and functional optimization (also known as infinite programming). Classes of problems are investigated for which variable-basis schemes with sigmoidal computational units perform better than fixed-basis ones, in terms of the minimum number of computational units needed to achieve a desired accuracy in function approximation or approximate optimization. Previously known accuracy bounds are extended, with better rates, to families of d-variable functions that actually depend only on a subset of d′ ≪ d variables, where the indices of these d′ relevant variables are not known a priori.
