Exploiting parallelism in functional languages: a “paradigm-oriented” approach

Deriving parallelism automatically from functional programs is simple in theory, but very few practical implementations have been realised. Programs may contain too little or too much parallelism, causing a degradation in performance. Such parallelism could be controlled more efficiently if parallel algorithmic structures (or skeletons) are used in the design of algorithms. A structure captures the behaviour of a parallel programming paradigm and acts as a template in the design of an algorithm. This paper presents some important parallel programming paradigms and defines a structure for each of these paradigms. The iterative transformation paradigm (or geometric parallelism) is discussed in detail, and a framework under which programs can be developed and transformed into efficient and portable implementations is presented.

1.1 The "Paradigm-Oriented" Approach

In recent years, there has been a steady improvement in the design of high-performance parallel computers. However, writing parallel programs is still a complex and expensive task which requires detailed knowledge of the underlying architecture. It is now argued that functional languages could play an important role in the development of parallel applications. Their implicit parallelism eliminates the need to explicitly decompose a program into concurrent tasks, and to provide the necessary communication and synchronisation between these tasks. Following this principle, a typical parallelising compiler for functional languages would have the structure displayed in Fig. 1.1. Given an arbitrary functional program, the first phase analyses this program to detect parallelism, and encodes decisions such as evaluation order, partitioning of data, load balancing and granularity control in the form of annotations (e.g. (Burton 1987)). There are two forms of inherent parallelism: horizontal and vertical parallelism (Kelly 1989). Horizontal parallelism evaluates
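To make the skeleton idea concrete, the following is a minimal sketch in Haskell of a divide-and-conquer structure of the kind described above. The names `divConq` and `psum` are illustrative, not taken from the paper, and the definition is given sequentially: a real skeleton implementation would evaluate the subproblems concurrently (for instance with `par` from the GHC `parallel` package), while the programmer supplies only the four problem-specific functions.

```haskell
-- Sketch of a divide-and-conquer skeleton: the structure fixes the
-- recursion pattern, and an algorithm is obtained by instantiating
-- the four parameters. (Illustrative, sequential semantics only.)
divConq :: (a -> Bool)   -- is the problem trivially small?
        -> (a -> b)      -- solve a trivial instance directly
        -> (a -> [a])    -- split a problem into subproblems
        -> ([b] -> b)    -- combine the sub-results
        -> a -> b
divConq trivial solve split combine = go
  where
    go p
      | trivial p = solve p
      | otherwise = combine (map go (split p))
      -- a parallel variant would evaluate the recursive calls in
      -- `map go (split p)` concurrently

-- Example instantiation: summing a list by repeated halving.
psum :: [Int] -> Int
psum = divConq ((<= 1) . length)  -- trivial at 0 or 1 elements
               sum                -- solve the small case directly
               halve              -- split into two halves
               sum                -- combine the partial sums
  where
    halve xs = let n = length xs `div` 2
               in [take n xs, drop n xs]
```

For example, `psum [1..100]` yields 5050; only `halve` and the two uses of `sum` are specific to this algorithm, while `divConq` could equally be instantiated for sorting or searching.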

[1]  Geoffrey L. Burn.  Overview of a Parallel Reduction Machine Project II, 1989, PARLE.

[2]  Philip Wadler, et al.  Report on the programming language Haskell, 1992.

[3]  Thomas Johnsson, et al.  Parallel graph reduction with the (v, G)-machine, 1989, FPCA.

[4]  Simon L. Peyton Jones, et al.  High-performance parallel graph reduction, 1989, PARLE.

[5]  F. Warren Burton.  Functional programming for concurrent and distributed computing, 1987, Comput. J.

[6]  Murray Cole, et al.  Algorithmic skeletons: a structured approach to the management of parallel computation, 1988.

[7]  Fethi A. Rabhi, et al.  Divide-and-conquer and parallel graph reduction, 1991, Parallel Comput.

[8]  Geoffrey L. Burn, et al.  Overview of a Parallel Reduction Machine Project, 1987, PARLE.

[9]  H. T. Kung, et al.  Computational models for parallel computers, 1988, Philosophical Transactions of the Royal Society of London, Series A, Mathematical and Physical Sciences.

[10]  Udi Manber, et al.  DIB—a distributed implementation of backtracking, 1987, TOPL.

[11]  Paul H. J. Kelly.  Functional programming for loosely-coupled multiprocessors, 1989, Research Monographs in Parallel and Distributed Computing.

[12]  Anthony J. G. Hey, et al.  Experiments in MIMD parallelism, 1989, Future Gener. Comput. Syst.

[13]  G. C. Fox, et al.  Solving Problems on Concurrent Processors, 1988.

[14]  J. Schwarz.  'Paradigm-Oriented' Design of Parallel Iterative Programs Using Functional Languages, 1970.