Since CPUs hit the power wall earlier this decade, single-threaded CPU performance has been increasing at a much slower pace than it did in earlier decades [1]. The recent trend in hardware is instead to go multi-core and multi-threaded for more performance. Multi-core means that a CPU package contains more than one CPU core and acts like multiple CPUs. Multi-threaded CPUs present multiple virtual CPUs inside each core to use the execution resources more efficiently. In addition, larger systems have always used multiple CPU packages for better performance. Exploiting the performance potential of these multiple CPUs [2] requires software that runs in parallel; Amdahl's law, restated below, quantifies this. Traditionally only larger supercomputers and servers, which use many CPU sockets, needed major software scalability work, while cheaper systems had so few CPUs (one or perhaps two) that such work was unnecessary. But as individual CPUs gain more and more cores and threads, this is changing, and even relatively low-end systems now require extensive scalability work. The following table lists some current example systems that illustrate these trends.
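For context, Amdahl's law [1] bounds the speedup achievable on N CPUs when only a fraction p of a program's execution can run in parallel, while Gustafson's scaled-speedup argument [2] gives the more optimistic bound that applies when the problem size grows with the machine. The forms below are the standard textbook statements, restated here for illustration rather than quoted from this paper; p, s, and N are symbols introduced only for this sketch, with s the serial fraction of the runtime measured on the N-CPU system.

\[
S_{\mathrm{Amdahl}}(N) = \frac{1}{(1 - p) + \dfrac{p}{N}},
\qquad
S_{\mathrm{Gustafson}}(N) = N - s\,(N - 1)
\]

In either view the serial portion of the software limits how much of the additional cores' performance can be exploited, which is why scalability work is needed even on low-end multi-core systems.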
[1] G. Amdahl et al. Validity of the single processor approach to achieving large scale computing capabilities. AFIPS '67 (Spring), 1967.
[2] John L. Gustafson et al. Reevaluating Amdahl's law. CACM, 1988.
[3] Curt Schimmel. UNIX Systems for Modern Architectures: Symmetric Multiprocessing and Caching for Kernel Programmers. Addison-Wesley Professional Computing Series, 1994.
[4] Uresh K. Vahalia. UNIX Internals: The New Frontiers. 1995.
[5] Paul E. McKenney et al. Scaling dcache with RCU. 2004.