In the past, most significant improvements in computer performance have been achieved as a result of advances in the underlying device technology. The introduction of large scale parallelism at the inter-processor level now represents a viable alternative. However, this method also introduces new difficulties, most notably the conceptual barrier encountered by the user of such a system in coordinating many concurrent activities towards a single goal. Thus, the design and implementation of software systems which can ease this burden are of increasing importance. Such a system must find a good balance between the simplicity of the interface presented and the efficiency with which it can be implemented. This thesis considers existing work in the area and proposes a new approach. The new system presents the user with a selection of independent "algorithmic skeletons", each of which describes the structure of a particular style of algorithm. The user must describe a solution to a problem as an instance of the appropriate skeleton. The implementation task is simplified by the fact that each skeleton may be considered independently, in contrast to the monolithic programming interfaces of existing systems at a similar level of abstraction. The four skeletons presented here are based on the notions of "recursive divide and conquer", "task queues", "iterative combination" and "clustering". Each skeleton is introduced in terms of the abstraction it presents to the user. Implementation on a square grid of autonomous processor-memory pairs is considered. Finally, examples of problems which could be solved in terms of each skeleton are presented. In conclusion, the strengths and weaknesses of the "skeletal" approach are assessed in the context of the existing alternatives.
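To make the idea of a skeleton as a reusable algorithmic structure concrete, the following is a minimal sequential sketch of a "recursive divide and conquer" skeleton and one instance of it. The names (dc, trivial, solve, divide, combine) and the mergesort instance are illustrative assumptions, not the interface defined in the thesis, and the sketch ignores the parallel implementation on a grid of processor-memory pairs that the thesis addresses.

```haskell
-- Hypothetical "divide and conquer" skeleton: the user supplies only the
-- problem-specific pieces, and the skeleton fixes the overall structure.
dc :: (p -> Bool)   -- is the problem trivial (solvable directly)?
   -> (p -> s)      -- direct solution for trivial problems
   -> (p -> [p])    -- split a problem into sub-problems
   -> ([s] -> s)    -- combine sub-solutions into a solution
   -> p -> s
dc trivial solve divide combine = go
  where
    go p
      | trivial p = solve p
      | otherwise = combine (map go (divide p))

-- Illustrative instance: mergesort expressed as an instance of the skeleton.
mergesort :: Ord a => [a] -> [a]
mergesort = dc trivial id divide combine
  where
    trivial xs = length xs <= 1
    divide xs  = let (l, r) = splitAt (length xs `div` 2) xs in [l, r]
    combine [l, r] = merge l r
    combine parts  = concat parts   -- defensive catch-all for other shapes
    merge [] ys = ys
    merge xs [] = xs
    merge (x:xs) (y:ys)
      | x <= y    = x : merge xs (y : ys)
      | otherwise = y : merge (x : xs) ys

main :: IO ()
main = print (mergesort [5, 3, 8, 1, 9, 2 :: Int])
```

The point of the sketch is the separation of concerns the abstract describes: the user writes only trivial, solve, divide and combine, while the skeleton owns the recursive structure and, in a parallel implementation, the distribution of sub-problems across processors.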