The Future of Experimental Research

It is an open secret that the performance of algorithms depends on their parameterizations, and on the parameterizations of the problem instances. However, these dependencies can serve as a means for understanding an algorithm's behavior. Building on modern statistical techniques, we demonstrate how to tune and understand algorithms. We present a comprehensive, effective, and very efficient methodology for the design and experimental analysis of direct search techniques such as evolutionary algorithms, differential evolution, and pattern search, as well as classical deterministic methods such as the Nelder-Mead simplex algorithm. Our approach extends the sequential parameter optimization (SPO) method, which has been successfully applied as a tuning procedure to numerous heuristics for practical and theoretical optimization problems. Optimization practitioners receive valuable hints for choosing an adequate heuristic for their optimization problems; theoreticians receive guidelines for systematically testing results on real problem instances. Using several examples from theory and practice, we demonstrate how SPO significantly improves the performance of many search heuristics. However, this performance gain does not come for free: we therefore discuss the costs of the tuning process, as well as its limitations and a number of currently unresolved open issues in experimental research on algorithms.
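To make the SPO idea concrete, the following is a minimal sketch of an SPO-style tuning loop, not the authors' SPOT implementation: fit a surrogate model to observed algorithm performance over the parameter space, propose a promising parameter setting via an infill criterion, evaluate it with repeated runs to average out noise, and refit. The target heuristic, its single tuned parameter (a step size), the Gaussian-process surrogate, and all budgets below are illustrative assumptions.

```python
# Minimal SPO-style tuning sketch (illustrative assumptions throughout).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(42)

def run_heuristic(step_size: float) -> float:
    """Hypothetical stochastic search heuristic: returns the best objective
    value found in one run; noisy, with an unknown optimal step size."""
    return (step_size - 0.3) ** 2 + rng.normal(scale=0.05)

def evaluate(step_size: float, repeats: int = 5) -> float:
    """Average over repeated runs, since single runs are noisy."""
    return float(np.mean([run_heuristic(step_size) for _ in range(repeats)]))

# Initial design: a small random sample of the parameter range [0, 1].
X = rng.uniform(0.0, 1.0, size=(8, 1))
y = np.array([evaluate(x[0]) for x in X])

for _ in range(15):  # sequential improvement steps
    model = GaussianProcessRegressor(normalize_y=True).fit(X, y)
    cand = rng.uniform(0.0, 1.0, size=(200, 1))
    mu, sigma = model.predict(cand, return_std=True)
    # Simple infill criterion: lower confidence bound, favouring points
    # with low predicted cost or high model uncertainty.
    nxt = cand[np.argmin(mu - sigma)]
    X = np.vstack([X, nxt])
    y = np.append(y, evaluate(nxt[0]))

best = X[np.argmin(y), 0]
print(f"tuned step size ~ {best:.3f}, mean performance {y.min():.4f}")
```

The repeated evaluations and the sequential model refits are where the tuning cost discussed above accumulates: each improvement step spends a full budget of algorithm runs before the surrogate is updated.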
