Understanding Curriculum Learning in Policy Optimization for Online Combinatorial Optimization

In recent years, reinforcement learning (RL) has begun to show promising results in tackling combinatorial optimization (CO) problems, in particular when coupled with curriculum learning to facilitate training. Despite the emerging empirical evidence, theoretical study of why RL helps is still in its early stages. This paper presents the first systematic study of policy optimization methods for online CO problems. We show that online CO problems can be naturally formulated as latent Markov decision processes (LMDPs), and prove convergence bounds on natural policy gradient (NPG) for solving LMDPs. Furthermore, our theory explains the benefit of curriculum learning: it can find a strong sampling policy and reduce the distribution shift, a critical quantity that governs the convergence rate in our theorem. For a canonical online CO problem, the Secretary Problem, we formally prove that the distribution shift is reduced exponentially with curriculum learning, even when the curriculum is randomly generated. Our theory also shows that the curriculum learning scheme used in prior work can be simplified from multi-step to single-step. Lastly, we provide extensive experiments on the Secretary Problem and Online Knapsack to verify our findings.
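To make the setup concrete, the following is a minimal sketch (our illustration, not the paper's code) of the Secretary Problem solved with tabular NPG. The paper formulates online CO problems as LMDPs; here the latent arrival order is marginalized out into the standard reduced MDP whose state is the time step with a best-so-far candidate, and advantages are computed exactly rather than estimated from rollouts. All names and hyperparameters (n, eta, iters) are illustrative assumptions.

```python
# Minimal sketch: Secretary Problem as a small MDP, solved with tabular
# natural policy gradient (NPG) under a softmax parameterization.
# State: time step t at which the current candidate is best-so-far.
# Actions: accept / reject. Reward 1 iff the accepted candidate is the
# overall best. Illustrative only; not the paper's implementation.
import numpy as np

n, eta, iters = 20, 2.0, 500          # horizon, NPG step size, iterations (assumed)
logits = np.zeros((n, 2))             # per-step logits for [accept, reject]

def policy(logits):
    """Softmax accept-probability at each step t = 0..n-1."""
    z = np.exp(logits - logits.max(axis=1, keepdims=True))
    return (z / z.sum(axis=1, keepdims=True))[:, 0]

for _ in range(iters):
    p = policy(logits)                # p[t] = Pr(accept | best-so-far at step t+1)
    # Exact backward pass: V[t] = value of still searching at 1-indexed step t+1.
    # At step k, the candidate is best-so-far with probability 1/k; accepting a
    # best-so-far candidate at step k wins with probability k/n.
    V = np.zeros(n + 1)
    for t in range(n - 1, -1, -1):
        k = t + 1
        best = p[t] * (k / n) + (1 - p[t]) * V[t + 1]   # value at a best-so-far state
        V[t] = (1 / k) * best + (1 - 1 / k) * V[t + 1]
    # NPG for softmax policies is a multiplicative-weights update on action
    # probabilities, pi <- pi * exp(eta * A) / Z, i.e., add eta * A to the logits.
    for t in range(n):
        k = t + 1
        v_best = p[t] * (k / n) + (1 - p[t]) * V[t + 1]
        logits[t, 0] += eta * (k / n - v_best)          # A(t, accept)
        logits[t, 1] += eta * (V[t + 1] - v_best)       # A(t, reject)

print(np.round(policy(logits), 2))    # approaches the classic reject-first-~n/e rule
```

Run as written, the learned accept-probabilities converge toward the classic threshold policy (reject roughly the first n/e candidates, then accept the next best-so-far), whose success probability approaches 1/e; the NPG-as-multiplicative-weights view follows the softmax analysis of Agarwal et al. (2019).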
