On the existence of stationary optimal policies for the average cost control problem of linear systems with abstract state-feedback

This paper establishes conditions for the existence of optimal stationary policies for a class of long-run average cost control problems. The discrete-time system is assumed to be linear with respect to the state, but the controls take an abstract state-feedback structure. This setup can represent systems in which the controller observes the state only through some specially structured output (no history is employed). It is shown that, if there exists an optimal abstract policy for the discounted-cost problem, and this policy generates an autonomous system with uniform exponential decay, then there exists an optimal stationary policy for the average cost problem. Assumptions involving the controllability and observability of linear time-varying systems are imposed.
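
A rough sketch of the setting, with notation chosen here for illustration rather than taken from the paper: the result concerns discrete-time stochastic systems that are linear in the state,

x_{k+1} = A_k x_k + B_k u_k + w_k, \qquad u_k = \pi(y_k), \qquad y_k = C_k x_k,

where the feedback map \pi acts only on the current structured output y_k (no history). Writing c(x,u) for the stage cost, the discounted and long-run average criteria are

J_\alpha(\pi, x_0) = \mathbb{E}\Big[\sum_{k=0}^{\infty} \alpha^k c(x_k, u_k)\Big], \qquad J(\pi, x_0) = \limsup_{N \to \infty} \frac{1}{N}\, \mathbb{E}\Big[\sum_{k=0}^{N-1} c(x_k, u_k)\Big],

and the main theorem asserts, roughly, that a discounted-optimal abstract policy whose closed loop decays uniformly exponentially yields an optimal stationary policy for the average cost J.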
