An event takes place at time t, a discrete random variable with a known probability mass function. At unit intervals of time, a measurement x is observed that yields information about the event; x is a random variable whose known probability density function depends on whether or not the event has yet occurred. After each observation, a decision is made that the event has or has not yet occurred. The latter decision implies waiting for the next measurement. The former decision, if correct, ends the procedure; if incorrect, this fact is incorporated and the procedure continues. A decision cost structure is assumed that assigns: (1) a fixed (false alarm) cost to deciding the event has occurred when, in fact, it has not; (2) a (time late) cost proportional to the time between the occurrence of the event and the decision that it has occurred. The minimum-expected-cost decision strategy and the minimum cost thus obtained are derived by means of dynamic programming.
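The dynamic-programming solution can be illustrated by value iteration on the posterior probability that the event has occurred. The sketch below is a minimal illustration under assumptions the abstract leaves open: the event time is geometric with hazard RHO (so the problem is stationary), the measurement is binary with P(x = 1) equal to Q0 before the event and Q1 after, and the belief is discretized on a grid; RHO, Q0, Q1, C_FALSE, C_LATE, and all other names are illustrative, not the paper's. A false alarm is modeled as in the abstract: the cost is paid, the decision-maker learns the event has not yet occurred (the belief resets to zero), and the procedure continues.

    import numpy as np

    RHO, Q0, Q1 = 0.05, 0.2, 0.8      # hazard rate; P(x=1) before / after the event
    C_FALSE, C_LATE = 10.0, 1.0       # false-alarm cost; per-period time-late cost

    GRID = np.linspace(0.0, 1.0, 1001)  # belief grid: p = P(event has occurred)

    def bayes_update(p, x):
        """Belief after one more period elapses and measurement x in {0, 1} arrives."""
        p_pred = p + (1.0 - p) * RHO                 # event may occur during the period
        lik1 = Q1 if x else 1.0 - Q1                 # likelihood given occurred
        lik0 = Q0 if x else 1.0 - Q0                 # likelihood given not occurred
        num = p_pred * lik1
        return num / (num + (1.0 - p_pred) * lik0)

    def branch_costs(V):
        """Expected cost of each decision at every grid belief, given value iterate V."""
        # Declare "occurred": free if right; if wrong (prob 1 - p), pay C_FALSE,
        # learn the event has not happened, and continue from belief 0.
        declare = (1.0 - GRID) * (C_FALSE + V[0])
        cont = np.empty_like(GRID)
        for i, p in enumerate(GRID):
            p_pred = p + (1.0 - p) * RHO
            px1 = p_pred * Q1 + (1.0 - p_pred) * Q0  # marginal P(next x = 1)
            # Wait: charge expected lateness for one period (event already
            # occurred with probability p), observe x, and recurse on the belief.
            cont[i] = C_LATE * p + (
                px1 * np.interp(bayes_update(p, 1), GRID, V)
                + (1.0 - px1) * np.interp(bayes_update(p, 0), GRID, V)
            )
        return declare, cont

    V = np.zeros_like(GRID)
    for _ in range(2000):                            # value iteration to a fixed point
        declare, cont = branch_costs(V)
        V_next = np.minimum(declare, cont)
        if np.max(np.abs(V_next - V)) < 1e-9:
            break
        V = V_next

    p_star = GRID[np.argmax(declare <= cont)]        # first belief where declaring is cheaper
    print(f"declare once P(occurred) >= {p_star:.3f}; minimum expected cost V(0) = {V[0]:.3f}")

At the fixed point the two branch costs cross once, so the optimal strategy reduces to a threshold rule on the posterior: continue observing until the probability that the event has occurred first exceeds p*.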