At the opposite end of the spectrum from sequential computers, which execute tasks on a single CPU, stand parallel computers containing large numbers of computing nodes. In the shared-memory category, each node has direct access through a switching network to a memory bank, which may consist of a single large memory or of multiple medium-sized memory modules. Opposite to this first category are the distributed-memory systems, in which each node has direct access only to its own local memory. Running a program on the latter category in particular requires a mechanism that provides access to multiple address spaces, one for each local memory; data can be moved only by explicit transfer from one address space to another. Alongside these two categories are the physically distributed, shared-memory systems, which allow the nodes to exploit a single, globally shared address space. All categories, whose performance depends on the way the computing nodes are linked, need either a direct or a switched interconnection network for inter-node communication. Linking nodes without regard for scalability is unrealistic when large numbers of them are to be exploited, especially since the chosen connection scheme must provide fast and flexible communication at reasonable cost. Network topologies ranging from a single shared bus to elaborate fully connected schemes, together with their correspondingly intricate switching protocols, have been explored extensively. This paper introduces a different vision of the future prospects of an optically coupled, distributed, shared-memory, multiple-instruction multiple-data (MIMD) system. Within each cluster, an electrical crossbar handles the interconnections between the nodes, the various memory modules, and the external I/O channels; the clusters themselves are optically coupled through a free-space data-distribution system. Analogies with the design of the Convex SPP1000 substantiate the practicality of such an architecture. Following this introduction, an idealized picture of the fundamental properties of an optically based, fully connected, distributed, (virtual) shared-memory architecture is outlined.
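The contrast between the two memory models can be made concrete with a short sketch. The following C fragment is an illustration added here, not taken from the paper: it uses the standard MPI message-passing interface to show that, in a distributed-memory machine, a value held in one node's private address space becomes visible to another node only through an explicit transfer. In a (virtual) shared-memory machine the same exchange would reduce to an ordinary store and load on the globally shared address space.

    /* Minimal sketch (illustrative, not from the paper): each MPI process
     * models a node with its own private address space, so data moves only
     * by an explicit send/receive between address spaces. Assumes the
     * program is launched with at least two processes. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, value = 0;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            value = 42;  /* exists only in node 0's local memory */
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            /* value becomes visible here only after the explicit transfer */
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("node 1 received %d\n", value);
        }

        MPI_Finalize();
        return 0;
    }

Compiled with mpicc and run with mpirun -np 2, the sketch makes the cost model of the distributed-memory category explicit: every inter-node data movement is a visible communication event, which is precisely what a single globally shared address space hides from the programmer.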