Instruction Sets for Parallel Random Access Machines

Abstract: An important model of parallel computation is the Parallel Random Access Machine (PRAM), which comprises multiple processors that execute instructions synchronously and share a common memory. Formalized by Fortune and Wyllie (1978) and Goldschlager (1982), the PRAM is a much more natural model of parallel computation than older models such as combinational circuits and alternating Turing machines (Ruzzo, 1981), because it abstracts the salient features of a modern multiprocessor computer. Eventually an algorithm developed for the PRAM can be implemented on a parallel network computer such as a mesh-connected array computer (Thompson and Kung, 1977), a hypercube machine (Seitz, 1985), a cube-connected cycles machine (Preparata and Vuillemin, 1981), or a bounded-degree processor network (Alt et al., 1987); on all such network computers, the routing of data complicates the implementation of algorithms. The PRAM provides the foundation for the design of highly parallel algorithms (Luby, 1986; Miller and Reif, 1985; among many others). This model exposes the intrinsic parallelism in a computational problem because it simplifies the communication of data through a shared memory. To quantify differences in computational performance, we determine the time complexities of simulations between PRAMs with different instruction sets. We focus on the computational complexity of simulations between PRAMs with the following operations: multiplication, division, arbitrary left shift, arbitrary right shift, and probabilistic choice.
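
For intuition only, the sketch below (not taken from the paper) models one synchronous PRAM step in Python: every processor first reads its operands from the shared memory, and then all processors write their results in lockstep. The class name Pram, the two-phase step method, and the operand layout are illustrative assumptions, not a construction from the paper; the instruction-set extensions named in the abstract appear as the binary operations MULT, DIV, LSHIFT, RSHIFT, and COIN.

    # Minimal sketch of a synchronous PRAM step (illustrative; not the paper's model definition).
    from typing import Callable, List
    from dataclasses import dataclass
    import random

    @dataclass
    class Pram:
        # Shared memory plus a processor count; every processor executes the
        # same instruction in lockstep, distinguished only by its index.
        memory: List[int]
        n_procs: int

        def step(self, op: Callable[[int, int], int], dst: int) -> None:
            # Phase 1: processor i reads its two operands from shared memory.
            reads = [(self.memory[2 * i], self.memory[2 * i + 1])
                     for i in range(self.n_procs)]
            # Phase 2: all processors write their results simultaneously.
            for i, (a, b) in enumerate(reads):
                self.memory[dst + i] = op(a, b)

    # Instruction-set extensions named in the abstract, written as binary operations:
    MULT = lambda a, b: a * b                      # multiplication
    DIV = lambda a, b: a // b if b else 0          # (integer) division
    LSHIFT = lambda a, b: a << b                   # arbitrary left shift
    RSHIFT = lambda a, b: a >> b                   # arbitrary right shift
    COIN = lambda a, b: random.choice((a, b))      # probabilistic choice

    if __name__ == "__main__":
        p = Pram(memory=[1, 3, 2, 3, 4, 3, 8, 3, 0, 0, 0, 0], n_procs=4)
        p.step(LSHIFT, dst=8)    # processor i computes memory[2i] << memory[2i+1]
        print(p.memory[8:])      # [8, 16, 32, 64]

A simulation result of the kind studied in the paper then asks how many such synchronous steps a PRAM with one instruction set needs to reproduce a step of a PRAM equipped with a richer set of operations.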