The importance of parallel computing hardly needs emphasis. Many physical problems and abstract models are seriously compute-bound, since sequential computer technology now faces seemingly insurmountable physical limitations. It is widely believed that the only feasible path toward higher performance is to consider radically different computer organizations, in particular ones exploiting parallelism. This argument is indeed rather old now, and considerable progress has been made in the construction of highly parallel computers. One of the simplest and most promising types of parallel machines is the well-known multiprocessor architecture: a collection of autonomous processors, with either shared or distributed memory, that are interconnected by a homogeneous communications network and usually communicate by sending messages. The interest in machines of this type is not surprising, since not only do they avoid the classic "von Neumann bottleneck" by being effectively decentralized, but they are also extensible and in general quite easy to build. Indeed, more than a dozen commercial multiprocessors either are now or will soon be available.

Although designing and building multiprocessors has proceeded at a dramatic pace, the development of effective ways to program them generally has not. This is an unfortunate state of affairs, since experience with sequential machines tells us that software development, not hardware development, is the most critical element in a system's design. The immense complexity of parallel computation can only increase our dependence on software. Clearly we need effective ways to program the new generation of parallel machines.

In this article I introduce para-functional programming, a methodology for programming multiprocessor computing systems. It is based on a functional programming model augmented with features that allow programs to be mapped to specific multiprocessor topologies.
The most significant aspect of the methodology is that it treats the multiprocessor as a single autonomous computer onto which a program is mapped, rather than as a group of independent processors that carry out complex communication and require complex synchronization. In more conventional approaches to parallel programming, the latter treatment is often manifested as processes that cooperate by message-passing. However, such notions are absent in para-functional programming; indeed, a single language and evaluation model can be used from
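The central idea described above, annotating a functional program with mappings to processors without changing its meaning, can be illustrated with a small Haskell sketch. The `on` combinator, the `ProcId` type, and the neighbor-numbering scheme below are illustrative inventions for this sketch, not the article's actual notation; the key property they model is that a mapping annotation is semantically an identity, since it affects only where an expression is evaluated, never its value.

```haskell
-- A minimal sketch of para-functional-style mapping annotations,
-- assuming a hypothetical `on` combinator and integer processor ids.

type ProcId = Int

-- `on` is semantically the identity: a mapping annotation says where
-- an expression should be evaluated, but does not change its value.
on :: a -> ProcId -> a
on e _ = e

-- A divide-and-conquer sum whose two halves are annotated as mapped
-- to the left and right neighbors of the current (virtual) processor.
psum :: ProcId -> [Int] -> Int
psum _ []  = 0
psum _ [x] = x
psum p xs =
  let (l, r) = splitAt (length xs `div` 2) xs
  in (psum (p - 1) l `on` (p - 1)) + (psum (p + 1) r `on` (p + 1))

main :: IO ()
main = print (psum 0 [1 .. 10])  -- prints 55
```

Because `on` is an identity, erasing every annotation leaves an ordinary sequential functional program with the same result, which is precisely why a single language and evaluation model suffices under this methodology.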