The ideas described in this paper arise from a particular investigation of the general question: when information relevant to a sequence of decisions is collected by statistical sampling, is it worth controlling the local rate of sampling to provide more accurate information at certain times? Here, the value of any sampling and decision procedure is measured by the long-term average of all information and decision costs. The model is concerned with the design of a control chart, and it leads to a Markovian decision problem with three possible actions at each point of the state space. The determination of an optimal policy depends on the solution of a complicated free boundary problem. Although there is a well-established relation between the basic partial differential equation of this problem and Brownian motion, the investigation raises many questions, both analytical and probabilistic, which remain to be answered. However, some limited results are obtained by examining special formal solutions. In spite of serious gaps in the general theory, it is possible to establish useful bounds on the minimum average cost that can be attained.
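To fix ideas, the shape of the optimality condition in such a problem can be sketched as follows; the notation here is illustrative and not the paper's own. For a discrete-time Markovian decision problem with state space $S$, three actions $a \in \{1,2,3\}$, one-step cost $c(x,a)$, and transition law $P_a$, the minimum long-run average cost $\lambda$ and a relative value function $w$ would satisfy

\[
\lambda + w(x) \;=\; \min_{a \in \{1,2,3\}} \Bigl\{ c(x,a) + \int_{S} w(y)\, P_a(x, \mathrm{d}y) \Bigr\}, \qquad x \in S.
\]

In the diffusion setting indicated above, the expectation would be replaced by the generator of a controlled Brownian motion, say $\mathcal{L}_a w = \mu_a w' + \tfrac{1}{2}\sigma_a^2 w''$, and the condition $\min_a \{\, c(x,a) + \mathcal{L}_a w(x) - \lambda \,\} = 0$ becomes a free boundary problem: the state space splits into regions in which each action attains the minimum, and the unknown boundaries between these regions must be determined together with $w$ and $\lambda$.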