More Risk-Sensitive Markov Decision Processes

We investigate the problem of minimizing a certainty equivalent of the total or discounted cost generated by a Markov decision process (MDP), over both a finite and an infinite horizon. In contrast to the risk-neutral criterion, this optimization criterion takes the variability of the cost into account; it contains the classical risk-sensitive criterion with exponential utility as a special case. We show that the optimization problem can be solved by an ordinary MDP with an extended state space, and we give conditions under which an optimal policy exists. For an infinite time horizon we show that the minimal discounted cost can be obtained by value iteration and can be characterized, via a “sandwich” argument, as the unique solution of a fixed-point equation. Interestingly, it turns out that in the case of a power utility the problem simplifies and is of complexity similar to the exponential-utility case, yet it has not been treated in the literature so far. We also establish the validity and convergence of the policy improvement method. A simple numerical example, the classical repeated casino game, illustrates the influence of the certainty equivalent and its parameters. Finally, the average cost problem is also investigated. Surprisingly, it turns out that under suitable recurrence conditions on the MDP, for a convex power utility the minimal average cost does not depend on the parameter of the utility function and is equal to the risk-neutral average cost. This is in contrast to the classical risk-sensitive criterion with exponential utility.
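
For orientation, the certainty equivalent referred to above is the standard quantity: for a random cost X and a strictly increasing utility function U,

    CE_U(X) = U^{-1}( E[ U(X) ] ).

For the exponential utility U(x) = e^{\gamma x} with \gamma > 0 this reduces to the entropic risk measure

    CE(X) = (1/\gamma) \log E[ e^{\gamma X} ],

which is the classical risk-sensitive criterion mentioned above, while for the convex power utility U(x) = x^q with q >= 1, applied to nonnegative costs, it becomes the L^q-norm

    CE(X) = ( E[ X^q ] )^{1/q}.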

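To make the recursion concrete, the following minimal Python sketch runs finite-horizon value iteration for the exponential-utility special case, where the certainty equivalent of the total cost obeys the classical multiplicative dynamic programming recursion. The 2-state, 2-action MDP data are hypothetical and chosen only for illustration; the sketch does not reproduce the paper's extended-state-space construction for general utilities.

    import numpy as np

    # Minimal sketch: finite-horizon value iteration for the exponential-utility
    # certainty equivalent (1/gamma) * log E[exp(gamma * total cost)].
    gamma = 0.5     # risk-aversion parameter; gamma > 0 penalizes cost variability
    N = 10          # planning horizon

    # Hypothetical 2-state, 2-action MDP, for illustration only.
    # P[a, x, y] = probability of moving from state x to state y under action a
    P = np.array([[[0.9, 0.1],
                   [0.2, 0.8]],
                  [[0.5, 0.5],
                   [0.6, 0.4]]])
    # c[x, a] = one-stage cost in state x under action a
    c = np.array([[1.0, 2.0],
                  [4.0, 3.0]])

    # Multiplicative dynamic programming:
    #   w_n(x) = min_a exp(gamma * c(x, a)) * sum_y P(y | x, a) * w_{n-1}(y),
    # started from w_0 = 1; the minimal N-step certainty equivalent from state x
    # is (1/gamma) * log w_N(x).
    w = np.ones(2)
    for _ in range(N):
        q = np.exp(gamma * c.T) * (P @ w)   # q[a, x]: value of action a in state x
        w = q.min(axis=0)                   # minimize over actions

    print("minimal certainty equivalent per start state:", np.log(w) / gamma)

Because exp is increasing, minimizing the expected exponential utility and minimizing its certainty equivalent select the same policy; the log transform is applied only once, at the end.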