Compiling for locality of reference
Parallel computers provide a large degree of computational power for programmers who are willing and able to harness it. The introduction of high-level languages and good compilers made possible the wide use of sequential machines, but the lack of such tools for parallel machines hinders their widespread acceptance and use. Programmers must address issues such as process decomposition, synchronization, and load balancing. This is a severe burden and opens the door to time-dependent bugs, such as race conditions between reads and writes, which are extremely difficult to detect. In this thesis, we use compile-time analysis and automatic restructuring of programs to exploit a two-level memory hierarchy. Many multiprocessor architectures can be modelled as two-level memory hierarchies, including message-passing machines such as the Intel iPSC/2. We show that such an approach can exploit data locality while avoiding the overhead associated with run-time coherence management. At the same time, it relieves the programmer of the burden of managing process decomposition and synchronization by performing these tasks automatically. We have developed a parallelizing compiler which, given a sequential program and a memory layout of its data, performs process decomposition while balancing parallelism against locality of reference. A process decomposition is obtained by specializing the program, for each processor, to the data that resides on that processor. If this analysis fails, the compiler falls back to a simple but inefficient scheme called run-time resolution, in which each process's role in the computation is determined at run time by examining the data required for execution. Thus, our approach to process decomposition is "data-driven" rather than "program-driven." We discuss several message optimizations that address the issues of overhead and synchronization in message transmission.
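The contrast between compile-time specialization and run-time resolution can be sketched in a few lines. This is a minimal simulation on a single machine, assuming a block distribution of data; the names (`owner`, `run_time_resolution`, `specialized`) are illustrative and not taken from the thesis's compiler.

```python
N = 16          # problem size
P = 4           # number of simulated processors
b = list(range(N))
c = list(range(N))

def owner(i):
    """Block distribution: processor p owns indices [p*N/P, (p+1)*N/P)."""
    return i // (N // P)

def run_time_resolution(p, a):
    """Every processor scans the whole iteration space and tests, at run
    time, whether it owns the datum being written before executing."""
    for i in range(N):
        if owner(i) == p:        # ownership test paid on every iteration
            a[i] = b[i] + c[i]

def specialized(p, a):
    """Compile-time specialization: the iteration set owned by processor p
    is derived symbolically, so the per-iteration ownership test vanishes."""
    lo, hi = p * (N // P), (p + 1) * (N // P)
    for i in range(lo, hi):      # loop bounds shrunk at compile time
        a[i] = b[i] + c[i]

a1 = [0] * N
a2 = [0] * N
for p in range(P):
    run_time_resolution(p, a1)
    specialized(p, a2)
assert a1 == a2 == [b[i] + c[i] for i in range(N)]
```

Both schemes compute the same result; the difference is that run-time resolution pays an ownership test on every iteration of every processor, which is why it serves only as a fallback.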
Accumulation reorganizes the computation of a commutative and associative operator to reduce message traffic. Pipelining sends a value as soon after its computation as possible to increase parallelism. Vectorization of messages combines messages with the same source and the same destination to reduce overhead. Our experiments parallelizing SIMPLE, a large hydrodynamics benchmark, for the Intel iPSC/2 show speed-ups within sixty to seventy percent of those of hand-written code.
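Two of these optimizations, accumulation and message vectorization, can be illustrated with a small model. Message counts here are counted abstractly rather than sent over a network, and all names are hypothetical.

```python
N, P = 16, 4
data = list(range(N))

def owner(i):
    """Block distribution, as before."""
    return i // (N // P)

# Without accumulation, a global sum collected on processor 0 sends every
# remotely owned element individually: N - N//P messages.
naive_msgs = sum(1 for i in range(N) if owner(i) != 0)
assert naive_msgs == N - N // P

# With accumulation, each processor reduces its own elements locally and
# sends a single partial sum; the commutativity and associativity of +
# licenses the reordering. Message count drops to P - 1.
partials = [sum(data[i] for i in range(N) if owner(i) == p) for p in range(P)]
accum_msgs = P - 1
total = sum(partials)
assert total == sum(data)

# Vectorization of messages: pending messages that share the same
# (source, destination) pair are combined into one larger message,
# amortizing the per-message overhead.
pending = [(0, 1, "x"), (0, 1, "y"), (0, 2, "z")]   # (src, dst, payload)
combined = {}
for src, dst, payload in pending:
    combined.setdefault((src, dst), []).append(payload)
# Three logical messages become two physical ones.
assert len(combined) == 2
```

Accumulation changes how many messages are needed; vectorization changes how many physical transmissions carry them. Both reduce the fixed per-message overhead that dominates on machines like the iPSC/2.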