Generalized Inverse Optimization through Online Learning

Inverse optimization is a powerful paradigm for learning the preferences and restrictions that explain the behavior of a decision maker, based on a set of external signals and the corresponding decision pairs. However, most inverse optimization algorithms are designed for the batch setting, where all the data is available in advance. As a consequence, these methods have rarely been used in the online setting required by real-time applications. In this paper, we propose a general framework for inverse optimization through online learning. Specifically, we develop an online learning algorithm that uses an implicit update rule and can handle noisy data. Moreover, under additional regularity assumptions on the data and the model, we prove that our algorithm converges at a rate of $\mathcal{O}(1/\sqrt{T})$ and is statistically consistent. Our experiments show that the online learning approach recovers the parameters with high accuracy, is robust to noise, and achieves a dramatic improvement in computational efficiency over the batch learning approach.
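
To make the general framework concrete, the following display sketches one plausible form of the implicit update; it is only an illustration, assuming the decision maker solves a forward problem whose optimal solution set is $S(u;\theta)$ for parameter $\theta$, that $(u_t, y_t)$ is the (possibly noisy) signal-decision pair observed at round $t$, and that $\eta_t$ is a step size such as $\eta_t \propto 1/\sqrt{t}$; the exact loss and constraint structure used in the paper may differ from this sketch.

$$\ell(y_t, u_t; \theta) = \min_{x \in S(u_t;\,\theta)} \|y_t - x\|_2^2, \qquad \theta_{t+1} = \operatorname*{arg\,min}_{\theta \in \Theta} \; \tfrac{1}{2}\|\theta - \theta_t\|_2^2 + \eta_t\, \ell(y_t, u_t; \theta).$$

Because the new iterate is defined through a minimization rather than an explicit gradient step, the update is implicit (proximal in $\theta$), which is what allows it to tolerate noisy observations; with a step size decaying as $1/\sqrt{t}$, this form is consistent with the $\mathcal{O}(1/\sqrt{T})$ rate stated above.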
