Global virtual time (GVT) is used in distributed simulations to reclaim memory, commit output, detect termination, and handle errors. It is a global function that is computed many times during the course of a simulation, and a small GVT latency (the delay between its occurrence and its detection) allows resources to be used more efficiently. We present an algorithm that minimizes this latency, and we prove its correctness. The algorithm is unique in that a target virtual time (TVT) is predetermined by an initiator, who then detects when GVT ≥ TVT. This approach eliminates the avalanche effect, because the collection phase is spread out over time, and it allows for regular and timely GVT updates. The algorithm does not require messages to be acknowledged, which significantly reduces the message overhead of the simulation. One possible application is interactive simulators, where regular and timely updates would produce output that is up to date and appears smooth.
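The TVT-based detection described above can be illustrated with a minimal sketch. All names here (`Process`, `min_in_transit_time`, `gvt_at_least`) are hypothetical and not taken from the paper; in particular, the bound on in-transit messages is modeled as a directly readable field, whereas the actual algorithm establishes such a bound without message acknowledgements.

```python
# Hypothetical sketch of target-virtual-time (TVT) based GVT detection.
# An initiator fixes TVT in advance; each process reports once neither its
# local clock nor any message it has in flight can fall below TVT, at which
# point the initiator may conclude GVT >= TVT.

class Process:
    def __init__(self, pid):
        self.pid = pid
        self.local_virtual_time = 0.0
        # Lower bound on timestamps of this process's in-transit messages
        # (modeled as a readable field purely for illustration).
        self.min_in_transit_time = float("inf")

    def passed(self, tvt):
        # True when this process can no longer affect virtual times below tvt.
        return (self.local_virtual_time >= tvt
                and self.min_in_transit_time >= tvt)

def gvt_at_least(processes, tvt):
    # Initiator's test: GVT >= TVT once every process has passed the target.
    return all(p.passed(tvt) for p in processes)

procs = [Process(i) for i in range(3)]
for p in procs:
    p.local_virtual_time = 150.0
print(gvt_at_least(procs, 100.0))   # all processes have passed TVT = 100

procs[1].min_in_transit_time = 50.0  # a message timestamped 50 is in flight
print(gvt_at_least(procs, 100.0))   # GVT cannot yet be declared >= 100
```

Because the target is chosen before detection begins, processes can report as they individually pass TVT, spreading the collection phase over time rather than triggering a simultaneous burst of reports.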