Continuous Lunches Are Free Plus the Design of Optimal Optimization Algorithms

This paper analyses extensions of No-Free-Lunch (NFL) theorems to countably infinite and uncountably infinite domains, and investigates the design of optimal optimization algorithms. The original NFL theorem, due to Wolpert and Macready, states that for finite search domains all search heuristics have the same performance when averaged over the uniform distribution over all possible fitness functions. For infinite domains, extending the concept of a distribution over all possible functions raises measurability issues and requires tools from stochastic process theory. For countably infinite domains, we prove that the natural extension of NFL theorems does not hold under the standard formalization of probability, but that a weaker form of NFL does hold: there exist non-trivial distributions of fitness functions under which all search heuristics perform equally. Our main result is that for continuous domains, NFL does not hold. This free-lunch theorem rests on formalizing the concept of random fitness functions by means of random fields. We also consider the design of optimal optimization algorithms for a given random field, in a black-box setting, namely, under a complexity measure based solely on the number of requests to the fitness function. We derive an optimal algorithm, based on Bellman's decomposition principle, for a given number of iterates and a given distribution of fitness functions. We then approximate this algorithm with a Monte-Carlo planning algorithm close to UCT (Upper Confidence Trees), and provide experimental results.
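The Monte-Carlo planning approach mentioned above builds on the UCB1 selection rule at the heart of UCT: at each tree node, pick the child that maximizes its empirical mean reward plus an exploration bonus. The following is a minimal illustrative sketch of that selection rule only, not the authors' approximate planning algorithm; the dictionary-based node representation and the constant `c = sqrt(2)` are assumptions for the example.

```python
import math

def ucb1_select(children, total_visits, c=math.sqrt(2)):
    """Pick the child maximizing the UCB1 score: empirical mean reward
    plus an exploration bonus that shrinks as the child is visited more."""
    def score(child):
        if child["visits"] == 0:
            return float("inf")  # unvisited children are always tried first
        mean = child["reward_sum"] / child["visits"]
        bonus = c * math.sqrt(math.log(total_visits) / child["visits"])
        return mean + bonus
    return max(children, key=score)

# Hypothetical node statistics for illustration.
children = [
    {"name": "a", "visits": 10, "reward_sum": 6.0},
    {"name": "b", "visits": 2,  "reward_sum": 1.5},
    {"name": "c", "visits": 0,  "reward_sum": 0.0},
]
best = ucb1_select(children, total_visits=12)  # 'c': unvisited, score is +inf
```

Once every child has been visited, the rule trades off exploitation (high mean, child "a") against exploration (few visits, child "b"); with the statistics above, the exploration bonus makes "b" the next choice.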

[1] P. J. Daniell. Integrals in an Infinite Number of Dimensions. 1919.

[2] A. N. Kolmogorov, et al. Foundations of the Theory of Probability. 1960.

[3] J. Doob. Stochastic process measurability conditions. 1975.

[4] Erik H. Vanmarcke, et al. Random Fields: Analysis and Synthesis. 1985.

[5] Patrick Billingsley, et al. Probability and Measure. 1986.

[6] Jack P. C. Kleijnen, et al. Sensitivity analysis of simulation experiments: regression analysis and statistical design. 1992.

[7] Dimitri P. Bertsekas, et al. Dynamic Programming and Optimal Control, Two Volume Set. 1995.

[8] Patrick D. Surry, et al. Fundamental Limitations on Search Algorithms: Evolutionary Computing in Perspective. Computer Science Today, 1995.

[9] Jan van Leeuwen, et al. Computer Science Today. Lecture Notes in Computer Science, 1995.

[10] Donald Geman, et al. An Active Testing Model for Tracking Roads in Satellite Images. IEEE Trans. Pattern Anal. Mach. Intell., 1996.

[11] Natalia Alexandrov, et al. Multidisciplinary design optimization: state of the art. 1997.

[12] Katya Scheinberg, et al. Recent progress in unconstrained nonlinear optimization without derivatives. Math. Program., 1997.

[13] David H. Wolpert, et al. No free lunch theorems for optimization. IEEE Trans. Evol. Comput., 1997.

[14] Joseph C. Culberson, et al. On the Futility of Blind Search: An Algorithmic View of No Free Lunch. Evolutionary Computation, 1998.

[15] Donald R. Jones, et al. Efficient Global Optimization of Expensive Black-Box Functions. J. Glob. Optim., 1998.

[16] A. J. Booker, et al. A rigorous framework for optimization of expensive functions by surrogates. 1998.

[17] Thomas Jansen, et al. Perhaps Not a Free Lunch but at Least a Free Appetizer. 1999.

[18] L. D. Whitley, et al. The No Free Lunch and problem description length. 2001.

[19] Nikolaus Hansen, et al. Completely Derandomized Self-Adaptation in Evolution Strategies. Evolutionary Computation, 2001.

[20] Thomas Bäck, et al. Metamodel-Assisted Evolution Strategies. PPSN, 2002.

[21] Thomas Jansen, et al. Optimization with randomized search heuristics - the (A)NFL theorem, realistic scenarios, and difficult functions. Theor. Comput. Sci., 2002.

[22] Paul Jung, et al. No free lunch. Health Affairs, 2002.

[23] Joshua D. Knowles, et al. Some multiobjective optimizers are better than others. The 2003 Congress on Evolutionary Computation (CEC '03), 2003.

[24] Thomas M. Lavelle, et al. Neural Network and Regression Methods Demonstrated in the Design Optimization of a Subsonic Aircraft. 2003.

[25] Marc Toussaint, et al. A No-Free-Lunch theorem for non-uniform distributions of target functions. J. Math. Model. Algorithms, 2004.

[26] Andy J. Keane, et al. A Derivative Based Surrogate Model for Approximating and Optimizing the Output of an Expensive Computer Simulation. J. Glob. Optim., 2004.

[27] Milagros Van Grieken. Optimisation pour l'apprentissage et apprentissage pour l'optimisation (Optimization for learning and learning for optimization). 2004.

[28] Sean R. Eddy. What is dynamic programming? Nature Biotechnology, 2004.

[29] Bernhard Sendhoff, et al. Structure optimization of neural networks for evolutionary design optimization. Soft Comput., 2005.

[30] Olivier Teytaud, et al. Local and global order 3/2 convergence of a surrogate evolutionary algorithm. GECCO '05, 2005.

[31] Andy J. Keane, et al. Computational Approaches for Aerospace Design: The Pursuit of Excellence. 2005.

[32] Christian Gagné, et al. Resource-Aware Parameterizations of EDA. 2006 IEEE International Conference on Evolutionary Computation, 2006.

[33] Thomas Philip Runarsson. Ordinal Regression in Evolutionary Computation. PPSN, 2006.

[34] Petros Koumoutsakos, et al. Local Meta-models for Optimization Using Evolution Strategies. PPSN, 2006.

[35] Jesse Hoey, et al. An analytic solution to discrete Bayesian reinforcement learning. ICML, 2006.

[36] Csaba Szepesvári, et al. Bandit Based Monte-Carlo Planning. ECML, 2006.

[37] David Silver, et al. Combining online and offline knowledge in UCT. ICML '07, 2007.

[38] J. Dennis, et al. Managing Approximation Models in Optimization. 2007.

[39] Olivier Teytaud, et al. Comparison-Based Algorithms Are Robust and Randomized Algorithms Are Anytime. Evolutionary Computation, 2007.

[40] Sylvain Gelly, et al. Modifications of UCT and sequence-like simulations for Monte-Carlo Go. 2007 IEEE Symposium on Computational Intelligence and Games, 2007.

[41] Olivier Teytaud, et al. On the Parallelization of Monte-Carlo planning. ICINCO, 2008.

[42] J. Parojčić, et al. Artificial neural networks in the modeling and optimization of aspirin extended release tablets with Eudragit L 100 as matrix substance. AAPS PharmSciTech, 2008.

[43] Eric Walter, et al. An informational approach to the global optimization of expensive-to-evaluate functions. J. Glob. Optim., 2006.