A Utility Criterion for Markov Decision Processes

Optimality criteria for Markov decision processes have historically been based on a risk-neutral formulation of the decision maker's preferences. An explicit utility formulation, incorporating both risk and time preference and grounded in results from the axiomatic theory of choice under uncertainty, is developed. This yields an optimality criterion called utility optimality with constant aversion to risk: the objective is to maximize expected utility under an exponential utility function. Implicit in the formulation is a nonsequential interpretation of the decision process. For an infinite-horizon stationary Markov decision process with finite state and action spaces, it is shown that optimal policies exist but are not necessarily stationary. An example is given.
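As a sketch of the criterion in symbols (the notation below is ours, not taken from the abstract): given a reward stream $r_0, r_1, \dots$, a discount factor $\beta \in (0,1)$, and a constant risk-aversion coefficient $\gamma \neq 0$, the decision maker maximizes the expected exponential utility of the total discounted reward,

\[
\max_{\pi}\; \mathbb{E}^{\pi}\!\left[\, u\!\left(\sum_{t=0}^{\infty} \beta^{t} r_{t}\right) \right],
\qquad
u(x) \;=\; -\operatorname{sgn}(\gamma)\, e^{-\gamma x},
\]

so that $\gamma > 0$ models risk aversion and $\gamma < 0$ risk seeking. Because the utility is applied to the total discounted reward rather than accumulated stage by stage, the effective risk parameter facing the decision maker at stage $t$ scales as $\gamma \beta^{t}$, which varies with $t$; this is one way to see why optimal policies can fail to be stationary even when the process itself is.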