In this paper we analyze major recent trends and changes in the High Performance Computing (HPC) marketplace. The introduction of vector computers started the era of 'Supercomputing'. The initial success of vector computers in the seventies was driven by raw performance. Massively parallel systems (MPP) became successful in the early nineties due to their better price/performance ratios, which were enabled by the attack of the 'killer micros'. The success of microprocessor-based systems built on the shared-memory concept (referred to as symmetric multiprocessors, SMP), even for the very high-end systems, was the basis for the cluster concepts that emerged in the early 2000s. Within the first half of this decade, clusters of PCs and workstations have become the prevalent architecture for many HPC application areas across all ranges of performance. However, the Earth Simulator vector system demonstrated that many scientific applications can still benefit greatly from other computer architectures. At the same time, there is renewed broad interest in the scientific HPC community in new hardware architectures and new programming paradigms. The IBM BlueGene/L system is one early example of a shifting design focus for large-scale systems. The DARPA HPCS program has the declared goal of building a Petaflops computer system by the end of the decade using novel computer architectures.