Linear Programming and Average Optimality of Markov Control Processes on Borel Spaces---Unbounded Costs

This paper is concerned with the linear programming formulation of Markov control processes with Borel state and action spaces, under the average cost (AC) criterion. The one-stage cost function may be unbounded. A linear program $EP$ and its dual, $EP^*$, are introduced. Their values, $\inf EP$ and $\sup EP^*$, bound the value (say, $\inf AC$) of the $AC$ problem, i.e., $\sup EP^{*} \leq \inf AC \leq \inf EP$. Conditions are given under which there is no duality gap, viz., $\sup EP^{*}=\inf EP$, and under which strong duality holds, so that both $EP$ and $EP^*$ are solvable and their optimal values satisfy $\max EP^{*} = \min EP$. Strong duality implies (i) that an $AC$-optimal control policy exists and (ii) that the $AC$ optimality equation holds almost everywhere. These results are applied to a general vector-valued, additive-noise system with quadratic costs.
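For orientation, a common way to set up such a primal--dual pair for average-cost problems takes occupation measures as primal variables and a pair $(\rho,h)$ as dual variables; the sketch below shows this standard formulation and the weak-duality argument behind $\sup EP^{*}\leq \inf AC \leq \inf EP$. The precise definitions of $EP$ and $EP^*$ in the paper, tailored to unbounded costs, may differ in their constraint and integrability conditions. Here $X$ is the state space, $A$ the action space, $c$ the one-stage cost, and $Q$ the transition kernel.
\[
  (EP)\colon\quad \inf_{\mu}\ \int_{X\times A} c\,d\mu
  \quad\text{s.t.}\quad
  \mu(B\times A)=\int_{X\times A} Q(B\mid x,a)\,\mu(d(x,a))\ \ \forall B\in\mathcal{B}(X),
  \quad \mu\ \text{a probability measure on}\ X\times A,
\]
\[
  (EP^*)\colon\quad \sup_{\rho,\,h}\ \rho
  \quad\text{s.t.}\quad
  \rho+h(x)\leq c(x,a)+\int_{X} h(y)\,Q(dy\mid x,a)\ \ \forall (x,a)\in X\times A.
\]
Weak duality follows by integrating the dual constraint against any primal-feasible $\mu$: if $h$ is $\mu$-integrable, the terms in $h$ cancel by the invariance constraint, giving $\rho\leq\int c\,d\mu$, and hence $\sup EP^{*}\leq \inf EP$; the value $\inf AC$ is sandwiched in between because (under suitable conditions) stationary policies induce primal-feasible occupation measures whose cost equals their long-run average cost.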