A personal perspective is given on new parallel environments and, specifically, on dataflow systems. The goal of the dataflow effort is to achieve general-purpose parallel computing without requiring the user to partition the problem explicitly or exposing the user to architectural inadequacies. One aspect is the development of a language, called Id, with parallel semantics built in implicitly. Id is layered: at its core is a single-assignment functional language, supplemented by an outer layer that allows more general and efficient access to arrays. Two characteristics make this language desirable. First, it is determinate; that is, it guarantees repeatability of results: the same program with the same input always produces the same answer. Second, the language design allows the compiler to find parallelism implicitly in the user's program. Another aspect of the dataflow research considered is the development of a shared-memory architecture that can scale to a large number of processors without degrading the performance of an individual processor. Research results on the issue of resource management are briefly discussed.
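The two properties claimed for Id, determinacy and implicitly exploitable parallelism, can be illustrated with a small sketch. Id itself is not shown in this abstract; the example below is in Python purely for illustration and is an assumption, not the paper's code. When subexpressions are pure and share no state, a compiler or runtime may evaluate them in any order, or concurrently, and the result is always the same.

```python
from concurrent.futures import ThreadPoolExecutor

# Two pure, single-assignment-style subexpressions with no shared state.
# A dataflow scheduler is free to run them in parallel; because neither
# has side effects, the program is determinate: same input, same answer.
def f(n):
    return sum(i * i for i in range(n))

def g(n):
    return sum(i + 1 for i in range(n))

def sequential(n):
    # One possible evaluation order.
    return f(n) + g(n)

def parallel(n):
    # A concurrent evaluation of the same two independent subexpressions.
    with ThreadPoolExecutor(max_workers=2) as pool:
        a = pool.submit(f, n)
        b = pool.submit(g, n)
        return a.result() + b.result()

# Evaluation order never changes the answer.
assert sequential(1000) == parallel(1000)
```

The point of the sketch is only the scheduling freedom: in a single-assignment, side-effect-free language the data dependences alone constrain evaluation order, so the parallelism need not be expressed by the programmer.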