Recent developments in stochastic MPC and sustainable development

Despite the extensive literature on predictive control and robustness to uncertainty, both multiplicative (e.g. parametric) and additive (e.g. exogenous), very little attention has been paid to the case of stochastic uncertainty. Yet this arises naturally in many control applications, for example when models are identified using least-squares procedures. More generally, stochastic uncertainty is a salient feature of other key areas of human endeavour, such as sustainable development. Sustainability refers to the strategy of encouraging development at the present time without compromising the potential for development in the future. Inevitably, modelling the effects of sustainable development policy over a horizon of, say, 30 years involves a very significant random element, which has to be taken into account when assessing the optimality of any proposed policy. Model Predictive Control (MPC) is well suited to generating constrained optimal solutions and would therefore be a natural tool for policy assessment; however, this first calls for suitable extensions to the stochastic case. The aim of this paper is to review some of the recent advances in this area and to provide a pilot study that demonstrates the efficacy of stochastic predictive control as a tool for assessing policy in a sustainable development problem concerning the allocation of public research and development budgets between alternative power generation technologies. This problem has been considered in earlier work, but only in the context of a single-shot, open-loop optimisation; similarly, stochastic predictive control methodologies have previously been considered only for general hypothetical control problems. The current paper brings together this body of work, proposes suitable extensions, and concludes with a closed-loop study of predictive control applied to a sustainable development policy assessment problem.
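To make the receding-horizon, closed-loop idea concrete, the sketch below illustrates one common way of handling additive stochastic uncertainty in MPC: tighten the state constraint by a probabilistic back-off and re-solve a finite-horizon problem at every step, applying only the first input. This is a minimal illustration under assumed conditions (a hypothetical scalar linear model, Gaussian disturbances, open-loop prediction variances and arbitrary numerical parameters); it is not the formulation developed in the paper or in the works it reviews.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Hypothetical scalar system x_{k+1} = a*x_k + b*u_k + w_k with additive
# Gaussian disturbance w_k ~ N(0, sigma^2).  All numbers are illustrative.
a, b, sigma = 0.9, 0.5, 0.1
r = 0.1          # input weight in the quadratic cost
N = 10           # prediction horizon
x_max = 1.0      # state constraint x <= x_max
p = 0.9          # required probability of constraint satisfaction
x_ref = 0.9      # set-point close to the constraint, so the back-off matters

# Constraint tightening: the open-loop prediction i steps ahead has standard
# deviation sigma*sqrt(sum_{j<i} a^(2j)); backing the nominal prediction off
# by norm.ppf(p) times this enforces Pr(x <= x_max) >= p at each step.
pred_std = sigma * np.sqrt(np.cumsum(a ** (2 * np.arange(N))))
backoff = norm.ppf(p) * pred_std

def nominal_rollout(u_seq, x0):
    """Disturbance-free (nominal) state predictions x_1,...,x_N."""
    x_pred = np.empty(N)
    x = x0
    for i, u in enumerate(u_seq):
        x = a * x + b * u
        x_pred[i] = x
    return x_pred

def cost(u_seq, x0):
    """Quadratic tracking cost on the nominal predictions."""
    x_pred = nominal_rollout(u_seq, x0)
    return np.sum((x_pred - x_ref) ** 2) + r * np.sum(u_seq ** 2)

def mpc_step(x0):
    """Solve the tightened finite-horizon problem; return the first input."""
    cons = {"type": "ineq",
            "fun": lambda u: (x_max - backoff) - nominal_rollout(u, x0)}
    res = minimize(cost, np.zeros(N), args=(x0,), method="SLSQP",
                   constraints=[cons])
    return res.x[0]

# Receding-horizon (closed-loop) simulation with sampled disturbances.
rng = np.random.default_rng(0)
x = 0.2
for k in range(20):
    u = mpc_step(x)
    x = a * x + b * u + rng.normal(0.0, sigma)
    print(f"k={k:2d}  u={u:+.3f}  x={x:+.3f}")
```

In this toy setting the controller holds the nominal trajectory below the tightened bound rather than at the set-point, which is the essential effect of the chance constraint; the papers reviewed here replace this simple back-off with more rigorous constructions such as conic reformulations and probabilistic invariance arguments.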
