Compiling Parallel Loops for High Performance Computers: Partitioning, Data Assignment and Remapping

Communication overhead in multiprocessor systems, exemplified by cache-coherency traffic and global memory accesses, has a substantial impact on multiprocessor performance. This thesis develops compile-time techniques that reduce the overhead of interprocessor communication for iterative data-parallel loops. These techniques exploit machine-specific information to minimize communication overhead, eliminating the need for a user to tune a program for each new multiprocessor; they are a necessary step toward software support for portable parallel programs.

Adaptive Data Partitioning (ADP) reduces the execution time of parallel programs by minimizing interprocessor communication for iterative data-parallel loops with near-neighbor communication. On many multiprocessors, the location of data in memory may be specified independently of the loop partition; data placement schemes are presented that minimize communication time. Under the loop partition specified by ADP, global data is partitioned into classes for each processor, and each processor can cache certain global data based on its classification.

Compilers must frequently evaluate machine-specific tradeoffs between load imbalance and communication. Optimal cyclic partitions are generated for loops with either a linearly varying or uniform computational structure and either neighborhood or dimensional multicast communication patterns. The CPR (Collective Partitioning and Remapping) algorithm partitions a collection of loops with various computational structures and communication patterns.

Experiments demonstrating the advantages of ADP, data placement, cyclic partitioning and CPR were conducted on the Encore Multimax and BBN TC2000 multiprocessors using the ADAPT system, a program partitioner that automatically restructures iterative parallel loops.
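The load-imbalance-versus-communication tradeoff the abstract refers to can be made concrete with a small sketch (not the thesis's ADP or CPR algorithms; the function names and the 1D stencil loop are illustrative assumptions). For a loop in which iteration i reads its neighbors a[i-1] and a[i+1], a block partition confines cross-processor references to block boundaries, while a cyclic partition, which balances linearly varying workloads well, makes nearly every neighbor reference cross a processor boundary:

```python
# Illustrative comparison of communication implied by block vs. cyclic
# partitions of a 1D near-neighbor loop (iteration i reads a[i-1], a[i+1]).
# This is a sketch of the tradeoff, not the thesis's partitioning algorithm.

def block_owner(i, n, p):
    """Owner of iteration i under a block partition of n iterations over p processors."""
    size = (n + p - 1) // p  # ceiling division: iterations per block
    return i // size

def cyclic_owner(i, n, p):
    """Owner of iteration i under a cyclic (round-robin) partition."""
    return i % p

def cross_references(owner, n, p):
    """Count neighbor references (i-1, i+1) whose owner differs from iteration i's."""
    crossings = 0
    for i in range(n):
        for j in (i - 1, i + 1):
            if 0 <= j < n and owner(j, n, p) != owner(i, n, p):
                crossings += 1
    return crossings

n, p = 64, 4
print("block :", cross_references(block_owner, n, p))   # 6: two references at each of 3 block boundaries
print("cyclic:", cross_references(cyclic_owner, n, p))  # 126: every neighbor reference crosses
```

A compiler choosing between these partitions must weigh the cyclic scheme's better load balance against its much higher communication volume, which is exactly the machine-specific tradeoff the thesis addresses.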