The High-Level Parallel Language ZPL Improves Productivity and Performance

In this paper, we qualitatively address how high-level parallel languages improve productivity and performance. Using ZPL as a case study, we discuss advantages that stem from a language having both a global (rather than a per-processor) view of the computation and an underlying performance model that statically identifies communication in code. We also candidly discuss several of ZPL's disadvantages.
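To make the global-versus-per-processor distinction concrete, the following is a minimal sketch in Python/NumPy rather than ZPL; the function names (`smooth_global`, `smooth_per_processor`) and the sequential simulation of ranks are illustrative assumptions, not ZPL or MPI code. The global view expresses a three-point stencil as one whole-array statement, leaving data distribution to the compiler; the per-processor view of the same computation must partition the array and explicitly exchange halo values between neighbors.

```python
import numpy as np

def smooth_global(a):
    # Global (ZPL-style) view: one whole-array statement.
    # Boundary elements are left unchanged.
    b = a.copy()
    b[1:-1] = (a[:-2] + a[1:-1] + a[2:]) / 3.0
    return b

def smooth_per_processor(a, nprocs=4):
    # Per-processor (message-passing-style) view of the same stencil,
    # simulated sequentially: each "rank" owns a block and must fetch
    # one halo element from each neighbor before computing.
    chunks = np.array_split(a, nprocs)
    out = []
    for rank, local in enumerate(chunks):
        left = chunks[rank - 1][-1:] if rank > 0 else np.empty(0)
        right = chunks[rank + 1][:1] if rank < nprocs - 1 else np.empty(0)
        ext = np.concatenate([left, local, right])  # local block + halos
        sm = ext.copy()
        sm[1:-1] = (ext[:-2] + ext[1:-1] + ext[2:]) / 3.0
        # Trim the halo cells back off before reassembling.
        lo = len(left)
        hi = len(ext) - len(right)
        out.append(sm[lo:hi])
    return np.concatenate(out)
```

The point of the contrast is that in the global formulation the communication (the halo exchange) is implicit in the array slices, whereas the per-processor formulation forces the programmer to manage partitioning, neighbor indexing, and boundary cases by hand; ZPL's performance model makes the induced communication statically visible to the programmer without requiring this manual bookkeeping.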
