RATE-OPTIMALITY OF THE COMPLETE EXPECTED IMPROVEMENT CRITERION

Expected improvement (EI) is a leading algorithmic approach to simulation-based optimization. However, it was recently proved that, in the context of ranking and selection, some of the best-known EI-type methods cause the probability of incorrect selection to converge at suboptimal rates. We investigate a more recent variant of EI, known as “complete EI,” proposed by Salemi, Nelson, and Staum (2014), and summarize results showing that, with some minor modifications, complete EI can be made to achieve the optimal convergence rate in ranking and selection with independent Gaussian noise. This is the strongest theoretical guarantee available for any EI-type method.
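
To make the criterion concrete, the following is a minimal Python sketch of a complete-EI-style allocation for ranking and selection with independent Gaussian noise. It uses the standard closed form of E[(Y_i - Y_{i*})^+] for a Gaussian difference whose variance sums the (plug-in) posterior variances of alternative i and the current sample-best i*. The function names, the plug-in variance estimates, and the simple rule for occasionally sampling the current best are illustrative assumptions, not the exact procedure or the modification analyzed in the paper.

import numpy as np
from scipy.stats import norm


def complete_ei(means, variances, counts):
    """Complete-EI-style score of each alternative vs. the current sample-best.

    means, variances, counts: arrays of sample means, sample variances, and
    replication counts for the k alternatives (maximization problem).
    """
    best = int(np.argmax(means))
    # Plug-in variance of each sample mean.
    post_var = variances / counts
    # Gaussian mean and std of the difference Y_i - Y_best; note that the
    # variance includes the uncertainty of BOTH i and the current best,
    # which is the "complete" part of the criterion.
    diff_mean = means - means[best]
    diff_std = np.sqrt(post_var + post_var[best])
    z = diff_mean / diff_std
    cei = diff_mean * norm.cdf(z) + diff_std * norm.pdf(z)
    # Exclude the current best itself so the allocation compares only the
    # non-best alternatives against i*.
    cei[best] = -np.inf
    return cei, best


def allocate_next(means, variances, counts):
    """Return the index to simulate next: the alternative maximizing the score.

    The extra branch forces a sample to the current best whenever it has the
    fewest replications; this is one simple illustrative hedge, not the
    specific modification whose rate-optimality is established in the paper.
    """
    cei, best = complete_ei(means, variances, counts)
    if counts[best] <= counts.min():
        return best
    return int(np.argmax(cei))


# Toy usage with k = 4 alternatives:
means = np.array([0.10, 0.40, 0.35, 0.00])
variances = np.array([1.0, 1.0, 1.5, 0.8])
counts = np.array([10, 10, 10, 10])
print(allocate_next(means, variances, counts))

Accounting for the uncertainty in the current best's estimated value is what distinguishes this score from classical EI, which treats the incumbent as if its value were known.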

[1] H. Ruben, A New Asymptotic Expansion for the Normal Probability Integral and Mill's Ratio, 1962.

[2] M. DeGroot, Optimal Statistical Decisions, 1970.

[3] Donald R. Jones et al., Efficient Global Optimization of Expensive Black-Box Functions, 1998, J. Glob. Optim.

[4] Ricki G. Ingalls, Introduction to Simulation, 2002, WSC '02.

[5] Peter W. Glynn et al., A large deviations perspective on ordinal optimization, 2004, Proceedings of the 2004 Winter Simulation Conference.

[6] Barry L. Nelson et al., The tradeoff between sampling and switching: New sequential procedures for indifference-zone selection, 2005.

[7] Stephen E. Chick et al., Chapter 9: Subjective Probability and Bayesian Methodology, 2006, Simulation.

[8] Jürgen Branke et al., Selecting a Selection Procedure, 2007, Manag. Sci.

[9] Barry L. Nelson et al., A brief introduction to optimization via simulation, 2009, Proceedings of the 2009 Winter Simulation Conference (WSC).

[10] Jürgen Branke et al., Sequential Sampling to Myopically Maximize the Expected Value of Information, 2010, INFORMS J. Comput.

[11] Loo Hay Lee et al., Stochastic Simulation Optimization - An Optimal Computing Budget Allocation, 2010, System Engineering and Operations Research.

[12] Ilya O. Ryzhov et al., Optimal learning with non-Gaussian rewards, 2013, 2013 Winter Simulations Conference (WSC).

[13] Barry L. Nelson et al., Discrete optimization via simulation using Gaussian Markov random fields, 2014, Proceedings of the Winter Simulation Conference 2014.

[14] Benjamin Van Roy et al., Learning to Optimize via Posterior Sampling, 2013, Math. Oper. Res.

[15] Huashuai Qu et al., Simulation optimization: A tutorial overview and recent developments in gradient-based methods, 2014, Proceedings of the Winter Simulation Conference 2014.

[16] Michael C. Fu et al., Handbook of Simulation Optimization, 2014.

[17] Barry L. Nelson et al., Discrete Optimization via Simulation, 2015.

[18] Boris Defourny et al., Optimal Learning in Linear Regression with Combinatorial Feature Selection, 2016, INFORMS J. Comput.

[19] Daniel Russo et al., Simple Bayesian Algorithms for Best Arm Identification, 2016, COLT.

[20] Diego Klabjan et al., Improving the Expected Improvement Algorithm, 2017, NIPS.

[21] Michael C. Fu et al., Myopic Allocation Policy With Asymptotically Optimal Sampling Rate, 2017, IEEE Transactions on Automatic Control.

[22] Ye Chen et al., Rate-optimality of the complete expected improvement criterion, 2017, 2017 Winter Simulation Conference (WSC).