Perturbations for Adaptive Regulation and Learning

Design of adaptive algorithms for simultaneous regulation and estimation of MIMO linear dynamical systems is a canonical reinforcement learning problem. Efficient policies whose regret (i.e., the increase in cost due to uncertainty) scales at a square-root rate in time have been studied extensively in the recent literature. Nevertheless, existing strategies are computationally intractable and require a priori knowledge of key system parameters. The only exception is a randomized Greedy regulator, for which asymptotic regret bounds have recently been established. However, randomized Greedy leads to probable fluctuations in the trajectory of the system, which renders its finite-time regret suboptimal. This work addresses the above issues by designing policies that utilize perturbations of the input signals. We show that perturbed Greedy guarantees non-asymptotic regret bounds of (nearly) square-root magnitude with respect to time. More generally, we establish high-probability bounds on both the regret and the learning accuracy under arbitrary input perturbations. The settings where Greedy attains the information-theoretic lower bound of logarithmic regret are also discussed. To obtain the results, state-of-the-art tools from martingale theory are leveraged, together with the recently introduced method of policy decomposition. Besides adaptive regulators, the analysis of input perturbations captures key applications including remote sensing and distributed control.
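To make the perturbed-Greedy idea concrete, the following is a minimal sketch of a certainty-equivalent (Greedy) regulator with input perturbations: the controller repeatedly estimates the dynamics matrices by least squares, computes the corresponding LQ feedback gain, and adds exploratory Gaussian noise to the input. All specific values here (system matrices, noise scales, the perturbation variance `sigma_u`) are illustrative assumptions, not the paper's choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# True system (unknown to the controller): x_{t+1} = A x_t + B u_t + w_t
A = np.array([[0.8, 0.1], [0.0, 0.7]])
B = np.array([[1.0], [0.5]])
Q, R = np.eye(2), np.eye(1)           # quadratic cost matrices

def riccati_gain(Ah, Bh, Q, R, iters=200):
    """Certainty-equivalent LQ gain via fixed-point Riccati iteration."""
    P = Q.copy()
    for _ in range(iters):
        P = Q + Ah.T @ P @ Ah - Ah.T @ P @ Bh @ np.linalg.solve(
            R + Bh.T @ P @ Bh, Bh.T @ P @ Ah)
    return np.linalg.solve(R + Bh.T @ P @ Bh, Bh.T @ P @ Ah)

T, sigma_u = 500, 0.5                 # horizon and perturbation scale
x = np.zeros(2)
Z, X1 = [], []                        # regressors [x; u] and next states
K = np.zeros((1, 2))                  # start from the zero gain

for t in range(T):
    # Greedy (certainty-equivalent) input plus an exploratory perturbation
    u = -K @ x + sigma_u * rng.standard_normal(1)
    x_next = A @ x + B @ u + 0.1 * rng.standard_normal(2)
    Z.append(np.concatenate([x, u])); X1.append(x_next)
    x = x_next
    if t >= 10:                       # re-estimate [A B] by least squares
        theta, *_ = np.linalg.lstsq(np.array(Z), np.array(X1), rcond=None)
        Ah, Bh = theta.T[:, :2], theta.T[:, 2:]
        K = riccati_gain(Ah, Bh, Q, R)
```

The perturbation term is what guarantees persistent excitation of the closed-loop data, so the least-squares estimates keep improving; the paper's results quantify the regret cost of such perturbations and the resulting learning accuracy.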
