No free lunch theorems for optimization

A framework is developed to explore the connection between effective optimization algorithms and the problems they are solving. A number of "no free lunch" (NFL) theorems are presented which establish that for any algorithm, any elevated performance over one class of problems is offset by performance over another class. These theorems result in a geometric interpretation of what it means for an algorithm to be well suited to an optimization problem. Applications of the NFL theorems to information-theoretic aspects of optimization and benchmark measures of performance are also presented. Other issues addressed include time-varying optimization problems and a priori "head-to-head" minimax distinctions between optimization algorithms, distinctions that result despite the NFL theorems' enforcing of a type of uniformity over all algorithms.
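The core NFL claim — that averaged uniformly over all objective functions, every search algorithm performs identically — can be checked exhaustively on a toy space. The sketch below (an illustration, not from the paper) uses fixed, non-retracing visiting orders as the simplest deterministic search algorithms and compares the average best value found after m evaluations over all |Y|^|X| functions; the space size, the two orderings, and the `avg_best` performance measure are all illustrative choices:

```python
from itertools import product

X = [0, 1, 2]   # toy search space
Y = [0, 1]      # possible objective values

def run(order, f, m):
    """Evaluate the first m points of a fixed visiting order;
    return the tuple of objective values observed."""
    return tuple(f[x] for x in order[:m])

def avg_best(order, m):
    """Average of the best value found after m evaluations,
    taken uniformly over all |Y|^|X| objective functions."""
    total, count = 0, 0
    for values in product(Y, repeat=len(X)):
        f = dict(zip(X, values))
        total += max(run(order, f, m))
        count += 1
    return total / count

# Two distinct deterministic "algorithms" (visiting orders):
a = [0, 1, 2]
b = [2, 0, 1]

# NFL on this toy space: the averages agree for every m.
for m in (1, 2, 3):
    assert avg_best(a, m) == avg_best(b, m)
print([avg_best(a, m) for m in (1, 2, 3)])  # → [0.5, 0.75, 0.875]
```

Fixed orderings are only a special case — the theorems cover any algorithm whose next sample may depend on the values seen so far — but the uniform average washes out exactly that information, which is why the two orderings tie at every horizon.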
