To understand processor performance, it is essential to use intuitive metrics and to be familiar with a few aspects of a simple scalar pipeline before attempting to understand more complex structures. This paper shows that cycles per instruction (CPI) is a simple dot product of event frequencies and event penalties, and that it is far more intuitive than its more popular cousin, instructions per cycle (IPC). CPI is separable into three components that account for the inherent work, the pipeline, and the memory hierarchy, respectively. Each of these components is a fixed upper limit, or “hard bound,” on the corresponding superscalar component. Over the last decade, the memory-hierarchy component has come to dominate the other two, and in the next decade, queueing at the memory data bus will become a significant part of it. In reaction to this trend, bus protocols will evolve, and this paper provides a general sketch of those protocols. An underlying theme of the paper is that power constraints have been a driving force in computer architecture since the first computers were built fifty years ago. In CMOS technology, power constraints will shape future microarchitecture in a positive and surprising way: a resurgence of the RISC approach is expected in high-performance design, which will cause client and server microarchitectures to converge.
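As a minimal sketch of the decomposition described above (the symbols and the grouping of events into components are illustrative, not the paper's own notation):

$$
\mathrm{CPI} \;=\; \sum_i f_i \, p_i \;=\; \mathrm{CPI}_{\text{work}} + \mathrm{CPI}_{\text{pipeline}} + \mathrm{CPI}_{\text{memory}},
$$

where $f_i$ is the per-instruction frequency of event $i$ (for example, a taken branch or a cache miss), $p_i$ is that event's penalty in cycles, and each component collects the terms for its own class of events. IPC is simply $1/\mathrm{CPI}$, which is why CPI adds cleanly across event classes while IPC does not.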