Algorithmic Parameter Optimization of the DFO Method with the OPAL Framework

We introduce the OPAL framework, in which the identification of good algorithmic parameters is formulated as a black-box optimization problem whose variables are the algorithmic parameters themselves. Besides the target algorithm, the user of the framework must supply or select two components. The first is a set of metrics defining which parameter values are acceptable and how the performance of the algorithm is measured. The second is a collection of representative sets of valid input data for the target algorithm. OPAL may be applied to virtually any context in which parameter tuning leads to increased performance. The black-box optimization problem is solved by a direct-search method that provides local optimality guarantees and offers a certain flexibility. We illustrate its use with a parameter-tuning application to the DFO method from the field of derivative-free optimization.
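The setup described above can be sketched in a few lines of code. This is a minimal, hypothetical illustration, not OPAL's actual API: the target algorithm is a toy gradient descent with one tunable step-size parameter, the metric is the total iteration count over a small set of representative test problems, and the black-box problem is solved by a simple coordinate search standing in for a full direct-search method such as MADS.

```python
# Hypothetical sketch: parameter tuning as black-box optimization.
# All names below (gradient_descent, performance, coordinate_search) are
# illustrative assumptions, not part of the OPAL framework's interface.

def gradient_descent(step, grad, x0, tol=1e-6, max_iter=10_000):
    """Target algorithm: return iterations needed to drive |grad| below tol."""
    x = x0
    for k in range(max_iter):
        g = grad(x)
        if abs(g) < tol:
            return k
        x -= step * g
    return max_iter  # non-convergence counts as worst-case performance

# Representative input data: quadratics f(x) = a*x^2/2, gradient a*x.
TEST_PROBLEMS = [(0.5, 1.0), (1.0, 1.0), (2.0, 1.0)]  # (curvature a, start x0)

def performance(step):
    """Metric: total iteration count over the test set (lower is better)."""
    if step <= 0:  # infeasible parameter value
        return float("inf")
    return sum(gradient_descent(step, lambda x, a=a: a * x, x0)
               for a, x0 in TEST_PROBLEMS)

def coordinate_search(f, x, h=0.5, h_min=1e-3):
    """Toy direct search: poll x +/- h, halve h when no poll point improves."""
    fx = f(x)
    while h > h_min:
        trial = min((x + h, x - h), key=f)
        ft = f(trial)
        if ft < fx:
            x, fx = trial, ft
        else:
            h /= 2.0
    return x, fx

best_step, best_cost = coordinate_search(performance, x=0.1)
```

In this sketch the three framework ingredients are explicit: the target algorithm (`gradient_descent`), the metric with its feasibility notion (`performance`, which rejects non-positive steps), and the test-problem collection (`TEST_PROBLEMS`). Swapping the toy coordinate search for a direct-search solver with convergence guarantees yields the structure the abstract describes.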
