A simulation of dynamic task allocation in a distributed computer system

Distributed processor systems are currently used for advanced, high-speed computation in application areas such as image processing, artificial intelligence, signal processing, and general data processing. Today's distributed and parallel processor computer systems require designers to partition an application into at least as many functions as there are processors; spare processors must be allocated, and function migration paths must be designed to permit fault-tolerant reconfiguration. The parallel process/parallel architecture control simulation (PPCS) models parallel task allocation on a distributed processor architecture. Parallel task allocation is a first step toward a dynamic parallel processor operating system that automatically assigns and reassigns application tasks to processors. The advantages of this approach are: dynamic reconfigurability, which removes the need for spare processing power reserved for failures; a reduced need for fallback and recovery software for fault detection; better-optimized partitioning of functions; and better load balancing across the available processors. PPCS models various distributed processing configurations, task dependencies, and the scheduling of tasks onto the processor architecture. The system implements fifteen heuristic scheduling algorithms that map a set of tasks onto the processing nodes of a distributed computer. The simulation demonstrates the feasibility of using fast heuristic algorithms to schedule a multiprocessor system with dynamic task allocation.
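The abstract does not specify which fifteen heuristics PPCS implements, but a representative member of this family is greedy list scheduling with a longest-processing-time (LPT) task order, whose worst-case bounds were analyzed by Graham. A minimal sketch, assuming independent tasks with known durations and identical processors (the function name `lpt_schedule` and the data layout are illustrative, not from the paper):

```python
import heapq

def lpt_schedule(durations, n_procs):
    """Greedy LPT list scheduling: sort tasks by descending duration,
    then repeatedly assign the next task to the currently least-loaded
    processor. Returns (task -> processor assignment, makespan)."""
    # Min-heap of (current_load, processor_id); ties break on processor id.
    loads = [(0.0, p) for p in range(n_procs)]
    heapq.heapify(loads)
    assignment = {}
    for task, dur in sorted(enumerate(durations), key=lambda t: -t[1]):
        load, proc = heapq.heappop(loads)      # least-loaded processor
        assignment[task] = proc
        heapq.heappush(loads, (load + dur, proc))
    makespan = max(load for load, _ in loads)  # finishing time of busiest node
    return assignment, makespan

# Five tasks on two processors: LPT achieves the optimal makespan of 11 here.
assignment, makespan = lpt_schedule([7, 5, 4, 3, 2], n_procs=2)
```

Heuristics of this kind run in O(n log n) time, which is what makes on-line reassignment after a processor failure plausible; an exact schedule would require solving an NP-complete problem.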
