A simple example of intravenous theophylline therapy is used to present and compare various drug administration policies based on stochastic control theory. The simplest approach, heuristic certainty-equivalence (HCE) control, assumes that the model parameters are known. Prior uncertainty in these parameters can be taken into account by using average optimal (AO) control. The available knowledge about the system can be improved by measuring the drug concentration some time after the beginning of treatment. This corresponds to the notion of feedback and leads to the HCE feedback (HCEF) and AO feedback (AOF) policies. A further step towards optimality is to choose the measurement time optimally, given that the final purpose is control of the system rather than estimation of its parameters. Finally, closed-loop optimal (CLO) control chooses both the dosage regimen and the measurement time optimally.
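To make the distinction between these policies concrete, here is a minimal numerical sketch assuming a one-compartment model with constant infusion, where the steady-state concentration is Css = R/CL for infusion rate R and clearance CL. All numbers (target concentration, lognormal prior on clearance, true clearance, measurement noise) are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Illustrative one-compartment model: steady-state concentration
# Css = R / CL for a constant infusion rate R (mg/h) and clearance CL (L/h).
rng = np.random.default_rng(0)
c_target = 10.0                      # assumed target theophylline concentration (mg/L)
cl_samples = rng.lognormal(mean=np.log(2.8), sigma=0.3, size=5000)  # assumed prior on CL

# HCE control: treat a point estimate (here, the prior mean) as the true clearance.
cl_hat = cl_samples.mean()
r_hce = cl_hat * c_target            # rate achieving Css = c_target if CL = cl_hat

# AO control: choose R to minimize the expected squared deviation from the target,
# averaging over the prior on CL (Monte Carlo approximation).
def expected_cost(r):
    css = r / cl_samples             # steady-state concentration for each sampled CL
    return np.mean((css - c_target) ** 2)

r_ao = minimize_scalar(expected_cost, bounds=(1.0, 100.0), method="bounded").x

# HCEF: one noisy concentration measurement refines the clearance estimate,
# and the infusion rate is then recomputed with the updated point estimate.
cl_true = 3.5                                      # hypothetical true clearance (L/h)
c_obs = r_hce / cl_true + rng.normal(0.0, 0.5)     # noisy steady-state measurement
cl_refit = r_hce / c_obs                           # re-estimated CL from the measurement
r_hcef = cl_refit * c_target                       # adjusted infusion rate

print(f"HCE rate: {r_hce:.2f} mg/h, expected cost {expected_cost(r_hce):.3f}")
print(f"AO  rate: {r_ao:.2f} mg/h, expected cost {expected_cost(r_ao):.3f}")
print(f"HCEF rate after measurement: {r_hcef:.2f} mg/h")
```

Because the cost averages over the prior, the AO rate generally differs from the HCE rate even though both target the same concentration; the feedback step then shows how a single measurement shifts the dose toward the true clearance. The AOF and CLO policies would, in addition, average the post-measurement cost over the prior and optimize the measurement time itself, which this sketch does not attempt.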