This report proposes a new two-stage optimization method for robust Model Predictive Control (RMPC) with Gaussian disturbances and state estimation error. Since the disturbance is unbounded, it is impossible to achieve zero probability of constraint violation. Our goal is to optimize the expected value of an objective function while limiting the probability of violating any constraint over the planning horizon (a joint chance constraint). Prior approaches include constraint tightening with ellipsoidal relaxation [8] and Particle Control [1], but the former yields very conservative results and the latter is computationally intensive. Our new approach divides the optimization problem into two stages: an upper stage that optimizes the risk allocation, and a lower stage that optimizes the control sequence subject to the correspondingly tightened constraints. The lower stage is a standard convex optimization, such as Linear Programming or Quadratic Programming. The upper stage is also a convex optimization under practical assumptions, but its objective function is not always differentiable, and computing its gradient or subgradient is expensive for large-scale problems. A necessary condition for optimality, which does not explicitly use the gradient and is hence easy to compute, is discussed. A descent algorithm for the upper stage, called Iterative Risk Allocation (IRA), which does not require gradient computation, is proposed. Although the algorithm is not guaranteed to converge to the optimum, empirical results show that it quickly converges to a point close to the optimum. Its suboptimality is much smaller than that of the ellipsoidal relaxation method, while it achieves a substantial speedup over Particle Control.
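The two-stage structure can be illustrated with a minimal sketch. The example below is not the paper's implementation: it uses a hypothetical one-dimensional problem in which the lower stage (minimize a quadratic cost subject to lower bounds tightened by a Gaussian quantile of each step's risk allocation) decouples and is solvable in closed form, so the upper-stage IRA loop — shrink the risk on inactive constraints toward the risk actually incurred, then redistribute the freed budget to the active ones — can be shown end to end with only the standard library. The function name `ira_sketch`, the problem data, and the relaxation factor `alpha` are all illustrative assumptions.

```python
from statistics import NormalDist

def ira_sketch(targets, b, sigma, total_risk, alpha=0.7, iters=20):
    """Toy Iterative Risk Allocation (IRA) loop, for illustration only.

    Lower stage (closed form here): for each step i,
        minimize (x_i - targets[i])^2
        subject to x_i >= b + sigma * Phi^{-1}(1 - delta_i),
    i.e. x_i = max(targets[i], tightened bound).
    Upper stage: reallocate the risk budget delta_i across the steps.
    """
    nd = NormalDist()
    n = len(targets)
    delta = [total_risk / n] * n  # start from a uniform risk allocation
    x = list(targets)
    for _ in range(iters):
        # Tightened bounds: larger risk allocation -> looser bound.
        bounds = [b + sigma * nd.inv_cdf(1 - d) for d in delta]
        # "Lower stage": closed-form solution of the decoupled toy problem.
        x = [max(t, lb) for t, lb in zip(targets, bounds)]
        active = [abs(xi - lb) < 1e-9 for xi, lb in zip(x, bounds)]
        if all(active) or not any(active):
            break  # no risk left to reallocate
        # Shrink risk on inactive constraints toward the risk actually used.
        for i in range(n):
            if not active[i]:
                used = 1 - nd.cdf((x[i] - b) / sigma)
                delta[i] = alpha * delta[i] + (1 - alpha) * used
        # Redistribute the freed budget equally among active constraints.
        residual = total_risk - sum(delta)
        n_active = sum(active)
        for i in range(n):
            if active[i]:
                delta[i] += residual / n_active
    return x, delta
```

Each iteration keeps the total allocation at the joint risk bound while concentrating it on the constraints that are actually binding, which loosens exactly those tightened bounds and lowers the lower-stage cost — the descent behavior described above, without ever forming a gradient.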
[1] J. Löfberg. Minimax Approaches to Robust Model Predictive Control, 2003.
[2] E. Kerrigan. Robust Constraint Satisfaction: Invariant Sets and Predictive Control, 2000.
[3] Aachen, et al. Stochastic Inequality Constrained Closed-loop Model Predictive Control: With Application to Chemical Process Operation, 2004.
[4] Manfred Morari, et al. Robust constrained model predictive control using linear matrix inequalities, Proceedings of the 1994 American Control Conference (ACC '94), 1994.
[5] Jun Yan, et al. Incorporating state estimation into model predictive control and its application to network traffic control, Automatica, 2005.
[6] Masahiro Ono, et al. An Efficient Motion Planning Algorithm for Stochastic Dynamic Systems with Constraints on Probability of Failure, AAAI, 2008.
[7] A. Richards, et al. Robust Receding Horizon Control using Generalized Constraint Tightening, 2007 American Control Conference, 2007.
[8] L. Blackmore, et al. Optimal manipulator path planning with obstacles using disjunctive programming, 2006 American Control Conference, 2006.
[9] M. Kothare, et al. Robust constrained model predictive control using linear matrix inequalities, Proceedings of the 1994 American Control Conference (ACC '94), 1994.
[10] L. Blackmore. A Probabilistic Particle Control Approach to Optimal, Robust Predictive Control, 2006.