Optimal control of piecewise deterministic Markov process

The trajectories of piecewise deterministic Markov processes are solutions of an ordinary (vector) differential equation with possible random jumps between the different integral curves. Both the continuous deterministic motion and the random jumps of the processes are controlled in order to minimize the expected value of a performance functional consisting of continuous, jump and terminal costs. A limiting form of the Hamilton-Jacobi-Bellman partial differential equation is shown to be a necessary and sufficient optimality condition. The existence of an optimal strategy is proved, and a characterization of the value function as the supremum of smooth subsolutions is also given. The approach consists of imbedding the original control problem tightly in a convex mathematical programming problem on the space of measures and then solving the latter by duality.
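For orientation, here is a minimal sketch of the HJB-type condition usually associated with controlled piecewise deterministic Markov processes; the notation (controlled flow field f, jump rate \lambda, post-jump kernel Q, running cost \ell, value function V, control set U) is assumed for illustration and is not taken from the abstract itself:

\inf_{u \in U} \Big[ f(x,u) \cdot \nabla V(x) + \lambda(x,u) \int \big( V(y) - V(x) \big) \, Q(dy \mid x,u) + \ell(x,u) \Big] = 0,

supplemented by a boundary condition of the form V(x) = \inf_{u \in U} \big[ c(x,u) + \int V(y) \, Q(dy \mid x,u) \big] at states x where the flow reaches the boundary and a forced jump with cost c occurs. Since the value function need not be smooth, the optimality condition is stated in a limiting (generalized) form, and the value function is characterized as the supremum of smooth subsolutions of the corresponding inequality.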
