Regularized and Distributionally Robust Data-Enabled Predictive Control

In this paper, we study a data-enabled predictive control (DeePC) algorithm applied to unknown stochastic linear time-invariant systems. The algorithm uses noise-corrupted input/output data to predict future trajectories and compute optimal control policies. To robustify against uncertainties in the input/output data, the control policies are computed to minimize a worst-case expectation of a given objective function. Using techniques from distributionally robust stochastic optimization, we prove that for certain objective functions, the worst-case optimization problem coincides with a regularized version of the DeePC algorithm. These results support the previously observed advantages of the regularized algorithm. We illustrate the robustness of the regularized algorithm through a numerical case study.
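
To make the setup concrete, below is a minimal sketch of a regularized DeePC step, not the authors' implementation: input/output data are arranged into block-Hankel matrices, the past blocks pin down the current latent state, and a one-norm regularizer on the decision vector g (the form that the paper's distributionally robust analysis recovers for certain objectives) is added to the tracking cost. The function and parameter names (regularized_deepc, lam_g, R_w, hankel) are illustrative assumptions, and cvxpy is used only as a convenient convex-optimization front end.

import numpy as np
import cvxpy as cp

def hankel(w, L):
    # Block-Hankel matrix with L block rows built from a data sequence w of shape (T, m).
    T, m = w.shape
    cols = T - L + 1
    H = np.zeros((L * m, cols))
    for i in range(cols):
        H[:, i] = w[i:i + L].reshape(-1)
    return H

def regularized_deepc(u_d, y_d, u_ini, y_ini, r, T_ini, N, lam_g=10.0, R_w=0.1):
    # u_d, y_d: offline input/output data (T x m, T x p); u_ini, y_ini: most recent
    # T_ini samples; r: flattened output reference of length N*p.
    m, p = u_d.shape[1], y_d.shape[1]
    L = T_ini + N
    Hu, Hy = hankel(u_d, L), hankel(y_d, L)
    Up, Uf = Hu[:T_ini * m], Hu[T_ini * m:]   # past / future input blocks
    Yp, Yf = Hy[:T_ini * p], Hy[T_ini * p:]   # past / future output blocks

    g = cp.Variable(Hu.shape[1])
    u = cp.Variable(N * m)
    y = cp.Variable(N * p)

    # Tracking cost + input penalty + one-norm regularization of g (illustrative weights).
    cost = cp.sum_squares(y - r) + R_w * cp.sum_squares(u) + lam_g * cp.norm(g, 1)
    constraints = [Up @ g == u_ini.reshape(-1),
                   Yp @ g == y_ini.reshape(-1),
                   Uf @ g == u,
                   Yf @ g == y]
    cp.Problem(cp.Minimize(cost), constraints).solve()
    return u.value[:m]   # first input of the plan (receding-horizon use)

In a receding-horizon loop one would apply the returned input, measure the new output, shift u_ini and y_ini, and re-solve; the regularization weight lam_g plays the role of the Wasserstein-ball radius in the worst-case reformulation discussed in the paper.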
