AE: A domain-agnostic platform for adaptive experimentation

We describe AE, a machine learning platform for adaptive experimentation (e.g., Bayesian optimization, bandit optimization) that automates the process of sequential experimentation. Unlike existing solutions, which are oriented primarily towards optimizing ML hyperparameters and simulations, AE is designed with online experimentation (A/B tests) in mind. Motivated by real-world examples from Facebook, we present a design for ML-assisted experimentation that supports multiple objectives; noisy, non-stationary measurements; and data drawn from multiple experimentation modalities.
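To make the adaptive-experimentation setting concrete, the sketch below shows one of the simplest sequential strategies the abstract alludes to: Thompson sampling for a Bernoulli bandit (choosing among A/B test arms with unknown success rates). This is an illustrative, self-contained example, not AE's actual implementation; the function name and arm rates are hypothetical.

```python
import random

def thompson_sampling(true_rates, n_rounds, seed=0):
    """Thompson sampling on a Bernoulli bandit with Beta(1, 1) priors per arm."""
    rng = random.Random(seed)
    n_arms = len(true_rates)
    successes = [0] * n_arms
    failures = [0] * n_arms
    for _ in range(n_rounds):
        # Draw a plausible success rate for each arm from its Beta posterior,
        # then play the arm whose draw is highest (probability matching).
        draws = [rng.betavariate(1 + successes[a], 1 + failures[a])
                 for a in range(n_arms)]
        arm = max(range(n_arms), key=lambda a: draws[a])
        # Observe a noisy binary outcome and update that arm's posterior.
        if rng.random() < true_rates[arm]:
            successes[arm] += 1
        else:
            failures[arm] += 1
    return successes, failures

# Over time, traffic concentrates on the best-performing arm.
successes, failures = thompson_sampling([0.1, 0.3, 0.5], 2000)
```

Bayesian-optimization strategies follow the same sample-observe-update loop but replace the per-arm Beta posteriors with a Gaussian process surrogate over a continuous parameter space.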
