This paper considers the regulation of the water levels of Lake Superior, which can be controlled at its outlet. The inflows to the lake were treated as random variables, and the objective was to find operating policies that minimize the expected undiscounted yearly losses over an infinite time horizon. The system was modeled as a periodic Markovian decision problem. A new algorithm, based on White's method of successive approximations for single-chained and completely ergodic Markovian decision problems, was developed and proved fairly efficient in terms of computer storage and computation time. Transition probabilities of the inflows were estimated from 64 years of data. The economic loss functions used in the model accounted for losses due to navigation inconvenience and shore property damage, and an extensive sensitivity analysis was conducted to determine their influence on the optimal operating policies. To validate the model, the newly developed policies were tested against the current operating policy using the historical inflow record. The results show that if some of the developed operating policies were adopted, the average yearly losses could be reduced by at least 15%, while the monthly lake-level variances could be reduced by 25%.
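As a rough illustration of the kind of computation involved, the following sketch implements White's method of successive approximations (relative value iteration) for an average-reward Markovian decision problem. The three-state "lake level" chain, the two "release" actions, and all numerical values below are invented for illustration; they are not the paper's Lake Superior model, its loss functions, or its periodic extension of White's algorithm.

```python
import numpy as np

# Hypothetical toy MDP: 3 lake-level states, 2 release actions.
# P[a][s][s'] = transition probability; r[a][s] = immediate reward
# (negative losses). Numbers are illustrative only.
P = np.array([
    [[0.7, 0.2, 0.1],   # action 0: "low release"
     [0.3, 0.5, 0.2],
     [0.1, 0.3, 0.6]],
    [[0.9, 0.1, 0.0],   # action 1: "high release"
     [0.5, 0.4, 0.1],
     [0.2, 0.4, 0.4]],
])
r = np.array([
    [-2.0, -1.0, -3.0],  # losses under action 0
    [-1.5, -0.5, -2.5],  # losses under action 1
])

def white_relative_value_iteration(P, r, tol=1e-9, max_iter=10_000):
    """Relative value iteration for an average-reward (undiscounted) MDP.

    After each sweep the value of a fixed reference state (state 0) is
    subtracted, which keeps the iterates bounded; for a unichain MDP the
    subtracted quantity converges to the optimal gain (average reward).
    Returns (gain, relative_values, greedy_policy).
    """
    n_actions, n_states, _ = P.shape
    v = np.zeros(n_states)
    for _ in range(max_iter):
        # Q[a, s] = r(s, a) + sum_{s'} P(s' | s, a) * v(s')
        Q = r + P @ v
        v_new = Q.max(axis=0)
        gain = v_new[0]          # value of the reference state
        v_new = v_new - gain     # re-normalize to keep values bounded
        if np.max(np.abs(v_new - v)) < tol:
            v = v_new
            break
        v = v_new
    policy = (r + P @ v).argmax(axis=0)
    return gain, v, policy

gain, v, policy = white_relative_value_iteration(P, r)
```

The paper's contribution generalizes this scheme to periodic chains, where the transition matrices and loss functions cycle with the month of the year, so one value vector per period is carried through the iteration.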
[1] Jens Ove Riis, et al. Discounted Markov Programming in a Periodic Process, 1965.
[2] Amedeo R. Odoni, et al. On Finding the Maximal Gain for Markov Decision Processes, 1969, Oper. Res.
[3] D. White, et al. Dynamic programming, Markov chains, and the method of successive approximations, 1963.
[4] J. MacQueen. A Modified Dynamic Programming Method for Markovian Decision Problems, 1966.
[5] Rolf A. Deininger, et al. Generalization of White's Method of Successive Approximations to Periodic Markovian Decision Processes, 1972, Oper. Res.