The Tera architecture was designed with several major goals in mind. First, it needed to be suitable for very high-speed implementations, i.e., admit a short clock period and be scalable to many processors. This goal will be achieved; a maximum configuration of the first implementation of the architecture will have 256 processors, 512 memory units, 256 I/O cache units, 256 I/O processors, and 4096 interconnection network nodes, with a clock period of less than 3 nanoseconds. The abstract architecture is scalable essentially without limit (although a particular implementation is not, of course). The only requirement is that the number of instruction streams increase more rapidly than the number of physical processors. Although this means that speedup is sublinear in the number of instruction streams, it can still increase linearly with the number of physical processors. The price/performance ratio of the system is unmatched, and puts Tera's high performance within economic reach.

Second, it was important that the architecture be applicable to a wide spectrum of problems. Programs that do not vectorize well, perhaps because of a preponderance of scalar operations or too-frequent conditional branches, will execute efficiently as long as there is sufficient parallelism to keep the processors busy. Virtually any parallelism available in the total computational workload can be turned into speed, from operation-level parallelism within program basic blocks to multiuser time- and space-sharing. The architecture
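The latency-hiding argument above (speedup sublinear in instruction streams, linear in physical processors) can be illustrated with a toy model. The latency value, the `speedup` function, and its linear-utilization formula here are illustrative assumptions for exposition, not Tera's actual pipeline parameters:

```python
def speedup(processors, streams_per_proc, latency=70):
    """Toy latency-hiding model (an illustrative assumption, not
    Tera's actual numbers): each stream issues one operation and then
    waits `latency` cycles, so a processor is fully utilized only
    when it has at least `latency` ready streams to interleave."""
    utilization = min(1.0, streams_per_proc / latency)
    return processors * utilization

# With enough streams per processor, speedup tracks processor count;
# adding streams beyond that point buys nothing (sublinear in streams).
print(speedup(256, 70))   # saturated: 256.0
print(speedup(256, 35))   # half-covered latency: 128.0
print(speedup(512, 70))   # doubling processors doubles speedup: 512.0
```

In this model the total stream count must grow as processors times latency to keep every processor busy, which is exactly the requirement stated above that streams increase more rapidly than processors.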