Average optimality for risk-sensitive control with general state space

This paper deals with discrete-time Markov control processes on a general state space. A long-run risk-sensitive average cost criterion is used as the performance measure, and the one-step cost function is nonnegative and possibly unbounded. Using the vanishing discount factor approach, we establish the optimality inequality and the existence of an optimal stationary strategy for the decision maker.
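For orientation, the following is a minimal sketch of the long-run risk-sensitive average cost criterion typically studied in this setting; the notation (risk-sensitivity factor $\gamma > 0$, state $x_t$, action $a_t$, one-step cost $c \ge 0$, strategy $\pi$) is assumed here for illustration and is not taken verbatim from the paper.

% Risk-sensitive (exponential) long-run average cost of a strategy pi started at x:
\[
  J_\gamma(x,\pi) \;=\; \limsup_{n \to \infty} \frac{1}{\gamma n}
  \log \mathbb{E}^{\pi}_{x}\!\left[ \exp\!\Big( \gamma \sum_{t=0}^{n-1} c(x_t, a_t) \Big) \right].
\]
% The decision maker seeks a stationary strategy minimizing J_gamma(x, .);
% the vanishing discount approach studies the discounted analogue and lets the
% discount factor tend to one to obtain the average-cost optimality inequality.

As $\gamma \downarrow 0$, this criterion formally recovers the ordinary (risk-neutral) average cost, which is why small-risk and vanishing-discount arguments are natural tools in this literature.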
