Two-stage Optimization Approach to Robust Model Predictive Control with a Joint Chance Constraint

This report proposes a new two-stage optimization method for robust Model Predictive Control (RMPC) with Gaussian disturbance and state estimation error. Since the disturbance is unbounded, it is impossible to guarantee zero probability of constraint violation. Our goal is to optimize the expected value of an objective function while limiting the probability of violating any constraint over the planning horizon (a joint chance constraint). Prior approaches include constraint tightening with ellipsoidal relaxation [8] and Particle Control [1], but the former yields very conservative results and the latter is computationally intensive. Our new approach divides the optimization problem into two stages: an upper stage that optimizes the risk allocation, and a lower stage that optimizes the control sequence subject to tightened constraints. The lower stage is a standard convex optimization problem, such as a Linear Program or Quadratic Program. The upper stage is also a convex optimization problem under practical assumptions, but its objective function is not always differentiable, and computing its gradient or subgradient is expensive for large-scale problems. We discuss a necessary condition for optimality that does not explicitly use the gradient and is therefore easy to compute. We then propose a descent algorithm for the upper stage, called Iterative Risk Allocation (IRA), which does not require gradient computation. Although the algorithm is not guaranteed to converge to the optimum, empirical results show that it converges quickly to a point close to the optimum; its suboptimality is much smaller than that of the ellipsoidal relaxation method, while it achieves a substantial speedup over Particle Control.
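To illustrate the constraint-tightening idea behind the lower stage, the sketch below shows how a risk allocation converts a chance constraint into a deterministic one for a scalar Gaussian state. This is a simplified illustration, not the authors' exact formulation: the function names and the uniform allocation are assumptions, and the real lower stage applies such tightened bounds inside an LP or QP over the control sequence.

```python
# Sketch: chance-constraint tightening under a risk allocation (hypothetical
# names, uniform allocation assumed for illustration).
#
# For a Gaussian state x_k ~ N(mu_k, sigma_k^2) and a constraint x_k <= b,
# allocating risk delta_k to step k tightens the bound to
#     b - Phi^{-1}(1 - delta_k) * sigma_k,
# so that mu_k below the tightened bound implies P(x_k > b) <= delta_k.
# By Boole's inequality, the joint violation probability over the horizon
# is then at most sum(delta_k).
from statistics import NormalDist


def tightened_bound(b: float, sigma: float, delta: float) -> float:
    """Deterministic bound: if the mean satisfies it, P(x > b) <= delta."""
    return b - NormalDist().inv_cdf(1.0 - delta) * sigma


# Uniform allocation of a total risk of 1% over a 10-step horizon.
horizon, total_risk = 10, 0.01
deltas = [total_risk / horizon] * horizon
# Uncertainty typically grows along the horizon (illustrative values).
sigmas = [0.1 * (k + 1) ** 0.5 for k in range(horizon)]
bounds = [tightened_bound(1.0, s, d) for s, d in zip(sigmas, deltas)]
```

The upper stage (IRA) would then redistribute the `deltas` across the horizon, loosening bounds where constraints are active and tightening them elsewhere, while keeping the total at `total_risk`.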