Learning and Intelligent Optimization

In this paper we consider the problem of estimating the relative performance of a given set of related algorithms. The predominant general approach involves executing each algorithm instance multiple times and computing an independent estimate from the performance observations made for each of them. A single execution can be expensive, making this a time-consuming process. We show how an algorithm can, in general, be viewed as a distribution over executions, and its performance as the expectation, over this distribution, of some measure of the desirability of an execution. We then describe how Importance Sampling can be used to generalize performance observations across algorithms with partially overlapping distributions, amortizing the cost of obtaining them. Finally, we implement the proposed approach as a proof of concept and validate it experimentally.
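To make the idea concrete, here is a minimal sketch of the importance-sampling identity the abstract alludes to; the symbols (A, B, p_A, p_B, f) are illustrative notation, not taken from the paper. If an algorithm A induces a distribution p_A over executions x, and its performance is the expectation J(A) = \mathbb{E}_{x \sim p_A}[f(x)] of a desirability measure f, then executions x_1, \dots, x_n drawn while running a related algorithm B with distribution p_B can be reused to estimate J(A):

J(A) = \mathbb{E}_{x \sim p_B}\!\left[ f(x)\,\frac{p_A(x)}{p_B(x)} \right] \;\approx\; \frac{1}{n} \sum_{i=1}^{n} f(x_i)\,\frac{p_A(x_i)}{p_B(x_i)},

which is well defined provided p_B(x) > 0 wherever p_A(x) f(x) \neq 0. The weights p_A(x_i)/p_B(x_i) are what allow observations to be shared across algorithms whose execution distributions only partially overlap.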
