Scheduling coarse-grain tasks, e.g. metaprograms on a grid, relies on estimates of the execution times of individual components to compute optimal schedules. Various factors (hazards) lead to estimation errors, which affect both the performance of the schedule and its resource utilization. We introduce the concept of the robustness of a schedule and present an analysis technique to determine the chance that a metaprogram exceeds its estimated execution time due to components outside its critical path. The results of this analysis are used to compute schedules that are less sensitive to hazards. This translates into more accurate reservation requirements for critical systems, and into a reduced expected execution time for non-critical metaprograms executed repeatedly. We also introduce the concept of the entropy of a schedule and conjecture that a more robust schedule is one that minimizes this entropy. Copyright © 2002 John Wiley & Sons, Ltd.
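
To make the two ideas in the abstract concrete, the sketch below estimates, by Monte Carlo simulation, the chance that a metaprogram overruns its estimated makespan because of components outside its nominal critical path, and computes one plausible "entropy of a schedule". The five-task DAG, the lognormal duration noise, and the path-based entropy are all illustrative assumptions; the paper's actual model and definitions may differ.

    # Minimal sketch, assuming a toy task DAG and lognormal hazard noise.
    # Not the paper's algorithm; an illustration of the analysis it describes.
    import math
    import random
    from collections import Counter

    # Task DAG: task -> (nominal duration, list of predecessors).
    TASKS = {
        "A": (4.0, []),
        "B": (3.0, ["A"]),
        "C": (5.0, ["A"]),   # nominal critical branch: A -> C -> E
        "D": (1.0, ["B"]),
        "E": (1.0, ["C", "D"]),
    }
    ORDER = ["A", "B", "C", "D", "E"]  # a topological order of TASKS

    def makespan_and_path(durations):
        """Longest-path finish time and the critical path that induces it."""
        finish, parent = {}, {}
        for t in ORDER:
            _, preds = TASKS[t]
            start = max((finish[p] for p in preds), default=0.0)
            parent[t] = max(preds, key=lambda p: finish[p]) if preds else None
            finish[t] = start + durations[t]
        last = max(ORDER, key=lambda t: finish[t])
        path, node = [], last
        while node is not None:
            path.append(node)
            node = parent[node]
        return finish[last], tuple(reversed(path))

    nominal = {t: d for t, (d, _) in TASKS.items()}
    est_makespan, nominal_path = makespan_and_path(nominal)

    random.seed(0)
    N, overruns_off_path, path_counts = 100_000, 0, Counter()
    for _ in range(N):
        # Hazards modeled as lognormal noise around the estimates (assumption).
        sample = {t: d * random.lognormvariate(0.0, 0.25) for t, d in nominal.items()}
        span, path = makespan_and_path(sample)
        path_counts[path] += 1
        if span > est_makespan and path != nominal_path:
            overruns_off_path += 1

    # Chance that an overrun is driven by tasks off the nominal critical path.
    print(f"P(overrun via off-critical-path tasks) ~ {overruns_off_path / N:.3f}")

    # One possible "entropy of a schedule": Shannon entropy of which path
    # turns out to be critical. A robust schedule concentrates this
    # distribution (low entropy); hazards that shift criticality between
    # competing paths raise it.
    entropy = -sum((c / N) * math.log2(c / N) for c in path_counts.values())
    print(f"critical-path entropy ~ {entropy:.3f} bits")

Under this reading, minimizing the entropy means choosing a schedule whose criticality rarely shifts under perturbation, which matches the abstract's conjecture that lower-entropy schedules are more robust.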