Computational models for parallel computers

Computational models define the usage patterns of a computer. They can be used to derive the architecture of a machine, to provide guidelines for programming tools, and to suggest how the machine should be used in applications. Identifying computational models is especially important for parallel computers, whose architectures and uses are in general still not well understood. This paper describes a number of computational models for parallel computers; these models characterize the communication patterns under which processors exchange intermediate results during a computation. Emphasis is placed on models for one-dimensional processor arrays, reflecting Carnegie Mellon's experience with the Warp systolic array machine. The models include local computation, domain partition, pipeline, multifunction pipeline, and ring.
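
To make the pipeline model concrete, here is a minimal sketch, not taken from the paper, of a one-dimensional processor array operating in pipeline mode: each cell receives an intermediate result from its left neighbor, performs its own stage of the computation, and forwards the result to its right neighbor. Goroutines stand in for the cells and channels for the systolic links; the per-cell stage function is an illustrative assumption.

    // Pipeline model on a 1-D processor array: values stream left to
    // right, each cell applying its own stage of the computation.
    package main

    import "fmt"

    // cell simulates one processor in the linear array. The stage here
    // (add the cell's index) is a stand-in for whatever work stage i
    // would do on an intermediate result.
    func cell(id int, in <-chan int, out chan<- int) {
        for x := range in {
            out <- x + id
        }
        close(out)
    }

    func main() {
        const numCells = 4

        // Build the linear array: numCells cells joined by numCells+1 links.
        links := make([]chan int, numCells+1)
        for i := range links {
            links[i] = make(chan int)
        }
        for i := 0; i < numCells; i++ {
            go cell(i, links[i], links[i+1])
        }

        // Stream inputs in at the left end; results emerge, in order,
        // at the right end once they have passed through every stage.
        go func() {
            for _, x := range []int{10, 20, 30} {
                links[0] <- x
            }
            close(links[0])
        }()
        for y := range links[numCells] {
            fmt.Println(y) // each input has accumulated 0+1+2+3 = 6
        }
    }

Under the same setup, the ring model would feed the right end of the array back into the left, letting intermediate results circulate; the domain-partition and local-computation models instead give each cell its own slice of the data with little or no inter-cell traffic.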
