This paper presents an algorithm that relieves the simulation user of choosing a simulation length. This choice is always tricky and often wastes CPU time, not to mention user time. Too often, simulation users skip confidence intervals altogether: they merely guess a simulation length and ignore the confidence in the simulation results. Those who do compute them generally try several lengths (and thus run several simulations) until the confidence intervals are small enough. The algorithm optimizes this choice by running a single simulation and stopping it nearly as soon as possible, i.e. as soon as predefined relative confidence intervals are reached for each performance criterion. To this end, the confidence intervals are computed periodically, at run time, with the batch means method. From these intermediate results and from estimator properties, a moving prediction of the required simulation length is also updated periodically. The algorithm automatically determines the batch size and the number of batches. This process continues until all confidence intervals fall below the predefined thresholds. The algorithm is implemented in MIMESIS, a computer architecture performance evaluation tool.
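The stopping rule described above can be sketched as follows. This is a minimal illustration, not the MIMESIS implementation: the function names and parameters are hypothetical, the confidence level and relative threshold are example values, and the batch count is fixed here for simplicity, whereas the paper's algorithm determines batch size and number of batches automatically.

```python
import math
import random


def batch_means_ci(samples, n_batches, z=1.96):
    """Estimate the mean and a ~95% confidence half-width via batch means.

    The run is cut into n_batches equal batches; the batch averages are
    treated as approximately i.i.d., so a normal-based interval applies.
    """
    batch_size = len(samples) // n_batches
    means = [sum(samples[i * batch_size:(i + 1) * batch_size]) / batch_size
             for i in range(n_batches)]
    grand = sum(means) / n_batches
    var = sum((m - grand) ** 2 for m in means) / (n_batches - 1)
    return grand, z * math.sqrt(var / n_batches)


def run_until_precise(sample_fn, rel_threshold=0.05, n_batches=20,
                      check_every=2000, max_samples=1_000_000):
    """Extend a single run, periodically recomputing the interval, and
    stop as soon as the relative half-width drops below rel_threshold."""
    samples = []
    while len(samples) < max_samples:
        samples.extend(sample_fn() for _ in range(check_every))
        mean, hw = batch_means_ci(samples, n_batches)
        if mean != 0 and hw / abs(mean) < rel_threshold:
            break  # requested relative precision reached
    return mean, hw, len(samples)
```

For instance, with exponential service times of mean 2, `run_until_precise(lambda: rng.expovariate(0.5))` stops once the half-width is below 5% of the estimated mean, instead of requiring a length chosen in advance.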