BAYESIAN SIMULATION OPTIMIZATION WITH COMMON RANDOM NUMBERS

We consider the problem of stochastic simulation optimization with common random numbers over a numerical search domain. We propose the Knowledge Gradient for Common Random Numbers (KG-CRN), a sequential sampling algorithm that is a simple, elegant modification of the Knowledge Gradient and that incorporates the correlated noise in simulation outputs into Gaussian Process meta-models. We compare this method against the standard Knowledge Gradient and a more recently proposed variant that allows pairwise sampling. Our method significantly outperforms both baselines under identical experimental conditions while greatly reducing computational cost relative to pairwise sampling.
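
To make the modelling idea concrete, the sketch below illustrates one way common random numbers can enter a Gaussian Process meta-model: because simulation runs that share random-number seeds produce correlated noise, the observation noise covariance becomes a full matrix rather than a diagonal one. This is a minimal illustration under our own assumptions, not the paper's implementation; the function names (rbf_kernel, posterior) and the specific noise structure are hypothetical.

```python
import numpy as np

def rbf_kernel(X1, X2, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel between two sets of 1-D design points."""
    d2 = (X1[:, None] - X2[None, :]) ** 2
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def posterior(X_train, y_train, X_test, noise_cov, lengthscale=1.0, variance=1.0):
    """GP posterior mean and covariance with a full (CRN-induced) noise covariance."""
    K = rbf_kernel(X_train, X_train, lengthscale, variance) + noise_cov
    K_s = rbf_kernel(X_train, X_test, lengthscale, variance)
    K_ss = rbf_kernel(X_test, X_test, lengthscale, variance)
    L = np.linalg.cholesky(K + 1e-10 * np.eye(len(X_train)))   # jitter for stability
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    v = np.linalg.solve(L, K_s)
    mu = K_s.T @ alpha
    cov = K_ss - v.T @ v
    return mu, cov

# Toy data: three design points simulated with a shared CRN seed, so their
# noise terms are positively correlated rather than independent.
X = np.array([0.0, 0.5, 1.0])
y = np.sin(2 * np.pi * X) + 0.1
sigma2, rho = 0.05, 0.8                       # noise variance and assumed CRN correlation
noise_cov = sigma2 * (rho * np.ones((3, 3)) + (1 - rho) * np.eye(3))

mu, cov = posterior(X, y, np.linspace(0.0, 1.0, 5), noise_cov)
print(mu)
```

A knowledge-gradient acquisition would then be computed from this posterior by evaluating, for each candidate point, the expected improvement in the maximum of the posterior mean after one additional (CRN-correlated) sample; that step is omitted here for brevity.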
