Storage Mapping Optimization for Parallel Programs

Data dependences are known to hamper efficient parallelization of programs. Memory expansion is a general method for removing dependences by assigning distinct memory locations to dependent writes. Parallelization via memory expansion requires both a moderate degree of expansion and efficiency at run time. We present a general storage mapping optimization framework for imperative programs, applicable to most loop nest parallelization techniques.
