From Predictive to Prescriptive Analytics

In this paper, we combine ideas from machine learning (ML) and operations research and management science (OR/MS) in developing a framework, along with specific methods, for using data to prescribe optimal decisions in OR/MS problems. In a departure from other work on data-driven optimization, and reflecting our practical experience with the data available in applications of OR/MS, we consider data consisting not only of observations of quantities with direct effect on costs/revenues, such as demand or returns, but predominantly of observations of associated auxiliary quantities. The main problem of interest is a conditional stochastic optimization problem, given imperfect observations, where the joint probability distributions that specify the problem are unknown. We demonstrate that our proposed solution methods, which are inspired by ML methods such as local regression, CART, and random forests, are generally applicable to a wide range of decision problems. We prove that they are tractable and asymptotically optimal even when data is not i.i.d. and may be censored. We extend this to the case where decision variables may directly affect uncertainty in unknown ways, such as pricing's effect on demand. As an analogue to R^2, we develop a metric P termed the coefficient of prescriptiveness to measure the prescriptive content of data and the efficacy of a policy from an operations perspective. To demonstrate the power of our approach in a real-world setting, we study an inventory management problem faced by the distribution arm of an international media conglomerate, which ships an average of 1 billion units per year. We leverage internal data and public online data harvested from IMDb, Rotten Tomatoes, and Google to prescribe operational decisions that outperform baseline measures. Specifically, the data we collect, leveraged by our methods, accounts for an 88% improvement as measured by our coefficient of prescriptiveness P.
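The prescriptive approach the abstract describes — a conditional stochastic optimization solved via a sample-average approximation whose weights are learned from auxiliary covariates — can be sketched in a few lines. The following is an illustrative reconstruction only, not the paper's implementation: the k-nearest-neighbor weighting, the newsvendor cost, and all function names (`knn_weights`, `prescribe`, `coefficient_of_prescriptiveness`) are assumptions made for the example.

```python
import numpy as np

def knn_weights(X, x0, k):
    """Uniform weights on the k training points whose covariates are nearest x0."""
    d = np.linalg.norm(X - x0, axis=1)
    w = np.zeros(len(X))
    w[np.argsort(d)[:k]] = 1.0 / k
    return w

def prescribe(X, Y, x0, k=10, b=4.0, h=1.0):
    """Weighted sample-average approximation for a newsvendor decision.

    With backorder cost b and holding cost h, the minimizer of the locally
    weighted expected cost is the b/(b+h) quantile of the weighted
    empirical demand distribution given covariates x0.
    """
    w = knn_weights(X, x0, k)
    order = np.argsort(Y)
    cum = np.cumsum(w[order])            # weighted empirical CDF of demand
    idx = np.searchsorted(cum, b / (b + h))
    return Y[order][min(idx, len(Y) - 1)]

def coefficient_of_prescriptiveness(cost_policy, cost_saa, cost_perfect):
    """P = 1 - (policy regret) / (covariate-blind SAA regret), analogous to R^2.

    P = 0 means the policy does no better than ignoring the covariates;
    P = 1 means it matches the perfect-foresight cost.
    """
    return 1.0 - (cost_policy - cost_perfect) / (cost_saa - cost_perfect)
```

Other weighting schemes from the paper's toolbox (kernel, CART, or random-forest proximities) would slot into the same template by replacing `knn_weights`.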
