A CPU utilization limit for massively parallel MIMD computers

Massively parallel computer systems based on off-the-shelf CPU chip sets have become commercially available. The authors demonstrate a theoretical limit on the silicon (or other circuit-medium) utilization of such architectures as the number of processors is scaled up. In addition, case studies of the Thinking Machines Corporation CM-5 and the Intel Touchstone are presented to quantify the maximum utilization of existing machines. Based on this utilization limit, the authors examine whether computer architects' current reliance on the MIMD (multiple-instruction multiple-data) model will remain practical in next-generation machines. To facilitate the analysis, they decouple the control-parallel and data-parallel models of computation from the MIMD and SIMD (single-instruction multiple-data) target platforms, respectively. The utilization of control-parallel paradigms executing on SIMD platforms is introduced for comparison. The authors also consider how communication overhead relates to machine-size scaling when virtual processing nodes are required.
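To make the scaling argument concrete, the following minimal sketch models utilization as the fraction of each processor's time spent on useful work when communication overhead grows with machine size. The function name, the power-law overhead term, and all constants are illustrative assumptions for this sketch, not the authors' actual model or measured values.

```python
# Hypothetical utilization model: per-step compute time is held fixed
# while per-processor communication overhead grows with machine size.
# The exponent and coefficients below are made-up illustrative values,
# not parameters taken from the paper.

def utilization(p, t_compute=1.0, c_comm=0.01, alpha=0.5):
    """Fraction of time each of p processors spends computing.

    t_compute: useful work per step per processor (arbitrary units)
    c_comm:    communication cost coefficient (assumed)
    alpha:     exponent governing overhead growth with machine size (assumed)
    """
    t_comm = c_comm * p ** alpha  # overhead rises as the machine scales up
    return t_compute / (t_compute + t_comm)

for p in (16, 256, 4096, 65536):
    print(f"{p:>6} processors: utilization ~ {utilization(p):.2f}")
```

Under these assumed parameters, utilization falls from roughly 0.96 at 16 processors to below 0.3 at 65,536, illustrating the kind of ceiling the abstract describes: overhead that grows with machine size bounds the fraction of the hardware doing useful work, independent of raw processor speed.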
