Control of piecewise-deterministic processes via discrete-time dynamic programming

Controlled piecewise-deterministic Markov processes follow deterministic trajectories punctuated by random jumps, at which the sample path is right-continuous. By considering the sequence of states visited by the process at its jump times, it is shown that a discounted infinite-horizon control problem can be reformulated as a discrete-time Markov decision problem (the ‘positive’ case). Under certain continuity assumptions, it is shown that an optimal stationary policy exists within the class of relaxed controls.
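As an illustrative sketch of the reformulation (the notation below is assumed, not taken from the abstract): writing $\phi_t(x)$ for the deterministic flow from state $x$, $T_1$ for the first jump time, $\alpha > 0$ for the discount rate, $c$ for the running cost, and $u$ for the control, the embedded discrete-time problem optimizes over one inter-jump interval at a time and restarts from the post-jump state:

```latex
% Sketch of the embedded Bellman equation (assumed notation):
% the value V at a jump state x is the optimal cost accrued along
% the deterministic flow up to the next jump time T_1, plus the
% discounted value restarted at the post-jump state X_{T_1}.
\[
  V(x) \;=\; \inf_{u}\;
  \mathbb{E}^{u}_{x}\!\left[
    \int_{0}^{T_1} e^{-\alpha t}\, c\bigl(\phi_t(x), u\bigr)\, dt
    \;+\; e^{-\alpha T_1}\, V\bigl(X_{T_1}\bigr)
  \right].
\]
```

Iterating this equation over successive jump times yields a discrete-time Markov decision problem whose stages are the post-jump states, which is the structure the abstract refers to.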