A case for direct-mapped caches

Direct-mapped caches are defined, and it is shown that trends toward larger cache sizes and faster hit times favor their use. The arguments are initially restricted to single-level caches in uniprocessors and are then extended to two-level cache hierarchies. How and when these uniprocessor arguments apply to caches in multiprocessors is also discussed.
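
To make the mapping concrete, the sketch below shows a direct-mapped lookup in C. The line size, line count, and 32-bit address split are illustrative assumptions, not parameters taken from the paper. Because each address maps to exactly one line, a hit requires only a single tag comparison and a miss needs no replacement decision, which is why direct-mapped hit times can be shorter than those of set-associative caches of the same size.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define LINE_SIZE 32u        /* bytes per line (assumed for illustration) */
#define NUM_LINES 1024u      /* direct-mapped: one candidate line per address */

typedef struct {
    bool     valid;
    uint32_t tag;
} line_t;

static line_t cache[NUM_LINES];

/* Direct-mapped lookup: the index selects exactly one line, so a hit
 * costs a single tag comparison and a miss needs no replacement policy. */
static bool access_cache(uint32_t addr)
{
    uint32_t index = (addr / LINE_SIZE) % NUM_LINES;
    uint32_t tag   = addr / (LINE_SIZE * NUM_LINES);

    if (cache[index].valid && cache[index].tag == tag)
        return true;                    /* hit */

    cache[index].valid = true;          /* miss: fill the only candidate line */
    cache[index].tag   = tag;
    return false;
}

int main(void)
{
    printf("first access:  %s\n", access_cache(0x12345678) ? "hit" : "miss");
    printf("second access: %s\n", access_cache(0x12345678) ? "hit" : "miss");
    return 0;
}

The competing effect is a higher miss ratio from conflict misses, since blocks that share an index evict one another. Viewed through the usual approximation that effective access time is roughly hit time plus miss ratio times miss penalty, the abstract's argument is that as caches grow and miss ratios shrink, the hit-time advantage of this single-comparison path tends to outweigh the extra misses.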
