Optimal Dynamic Control of a Useful Class of Randomly Jumping Processes
The purpose of the paper is to present a complete theory of optimal control of piecewise linear and piecewise monotone processes. The theory consists of a description of the processes, necessary and sufficient optimality conditions, existence and uniqueness results, and extremal and regularity properties of the optimal strategy. Mathematical proofs are only outlined (they will appear elsewhere), but hints concerning efficient determination of the optimal strategy are included.
Piecewise linear (monotone) processes are discontinuous Markov processes whose state components stay constant or change linearly (monotonically) between two consecutive jumps. All processes of inventory, storage, queuing, reliability and risk theory belong to these classes. The processes will be controlled by feedback (Markov) strategies based on complete state observations. The expected value of a performance functional of integral type with additional terminal costs is to be minimized.
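As an illustration of the class of processes described above, the following sketch simulates a classical risk-theory process: capital grows linearly at a premium rate between claim arrivals and jumps downward by a random claim amount at each arrival. All parameter names and distributional choices here are illustrative assumptions, not taken from the paper.

```python
import random

def simulate_risk_process(x0=10.0, c=1.0, lam=0.5, mean_claim=1.5,
                          horizon=100.0, seed=0):
    """Simulate a piecewise linear jump process from risk theory.

    Between consecutive jumps (Poisson arrivals with rate lam) the
    state grows linearly at premium rate c; at each jump it drops by
    an exponentially distributed claim with mean mean_claim.
    Returns the path as a list of (time, state) pairs, recording the
    state just before and just after each jump.
    """
    rng = random.Random(seed)
    t, x = 0.0, x0
    path = [(t, x)]
    while t < horizon:
        dt = rng.expovariate(lam)            # exponential inter-jump time
        t_next = min(t + dt, horizon)
        x += c * (t_next - t)                # linear drift between jumps
        t = t_next
        path.append((t, x))                  # state just before the jump
        if t >= horizon:
            break
        x -= rng.expovariate(1.0 / mean_claim)  # downward claim jump
        path.append((t, x))                  # state just after the jump
    return path
```

Inventory, storage, and queueing models fit the same template, differing only in the drift between jumps and the jump distribution.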
The semigroup theory of Markov processes will be used as the uniform mathematical tool for the whole theory, and the control problem will be reduced to the integration of a system of ordinary differential equations. Special emphasis will be given to the description of the processes by their infinitesimal characteristics, which are available explicitly in applied models; no finite-dimensional distributions are used.
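To give a flavor of the reduction to ordinary differential equations, the sketch below integrates a Bellman-type value ODE backward in time with an explicit Euler step, minimizing over a finite control set at each step. The equation dV/dt + min_u [c(u) + λ(u)(V_jump − V)] = 0 with V(T) = g is a simplified stand-in for the paper's system, and every function and parameter here is a hypothetical placeholder.

```python
def solve_value_ode(terminal_cost, running_cost, jump_rate, jump_value,
                    controls, T=1.0, n_steps=1000):
    """Backward Euler integration of a scalar Bellman-type ODE:

        dV/dt + min_u [ running_cost(u)
                        + jump_rate(u) * (jump_value - V) ] = 0,
        V(T) = terminal_cost.

    Returns the value at time 0 and the minimizing control at each
    time step (earliest step first).
    """
    dt = T / n_steps
    V = terminal_cost
    policy = []
    for _ in range(n_steps):
        # Minimize the Hamiltonian over the finite control set.
        best_u, best_h = None, None
        for u in controls:
            h = running_cost(u) + jump_rate(u) * (jump_value - V)
            if best_h is None or h < best_h:
                best_u, best_h = u, h
        V = V + dt * best_h          # one Euler step backward in time
        policy.append(best_u)
    return V, policy[::-1]
```

In the paper's setting the feedback strategy is recovered, as above, from the pointwise minimizer along the backward integration; the discrete control set and scalar state are simplifications for illustration.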