Data layout impact on the compilation of communications for a synchronous MSIMD machine

Abstract This paper presents the impact of data layout on the compilation of communications. Our methods are developed for high-speed communication networks. Their purpose is to synchronize communications with computations in order to reduce latency and memory usage.
