An Energy-Efficient L2 Cache Architecture Using Way-Tag Information under Write-Through Policy

To execute a program, the processor must fetch instructions and data from memory, and fetching them from the large main memory is far slower than the processor's execution rate. Cache memory is used to bridge this speed gap, and its performance is one of the most significant factors in achieving high processor performance. Because a cache is much smaller than main memory, it can supply data far more quickly; if the requested data is not present in the cache, it is fetched from main memory and stored in the cache. Caches exploit the principle of locality: they hold a small subset of the external memory contents, typically out of its original order, so that frequently used data and instructions, such as a data array or a small instruction loop, can be read without accessing main memory. The cache runs at the same speed as the rest of the processor, which is typically much faster than the external RAM, so accessing data in the cache is faster than accessing main memory.

This paper presents an energy-efficient L2 cache architecture, referred to as a way-tagged cache, operating under the write-through policy. The system contains an L1 cache and an L2 cache, and each memory address is divided into three fields: tag, index, and offset. Data stored in the L1 and L2 caches is located by its tag, and a copy of the L2 way information, the way tag, is kept in a way-tag array. When the processor needs data, it first checks the L1 cache; on an L1 miss it checks the L2 cache, and on an L2 miss it fetches the data from main memory, filling the L2 and L1 caches and recording the corresponding way tags in their way-tag arrays. Under the write-through policy, data written to the L1 cache is guaranteed to reside in the L2 cache as well, so the stored way tag identifies exactly which L2 way holds it; subsequent write-through accesses therefore need to activate only that way instead of all ways, improving energy efficiency over a conventional set-associative L2 cache.

Simulation results in ModelSim and synthesis results in Xilinx demonstrate that the proposed technique achieves a total power saving of 56.42% and a dynamic power saving of 41.31% in L2 caches on average, with small area overhead and no performance degradation. Furthermore, the idea of way tagging can be applied to existing low-power cache design techniques to further improve energy efficiency.
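To make the lookup flow concrete, the following is a minimal behavioral sketch in C of a way-tagged L2 access under write-through. It is illustrative only: the parameters (4-way L2, 64-byte lines, 1024 sets) and names such as l1_way_tag[], l2_read(), and l2_write_through() are assumptions for this sketch rather than the paper's RTL, and for brevity the way-tag array is indexed per L2 set, whereas the actual design keeps one way-tag entry per L1 cache line.

```c
#include <stdint.h>
#include <stdbool.h>

#define L2_WAYS     4
#define L2_SETS     1024
#define OFFSET_BITS 6            /* 64-byte cache lines          */
#define INDEX_BITS  10           /* log2(L2_SETS)                */

typedef struct {
    bool     valid;
    uint32_t tag;
} l2_line_t;

static l2_line_t l2[L2_SETS][L2_WAYS];   /* L2 tag array (data omitted)  */
static uint8_t   l1_way_tag[L2_SETS];    /* way-tag array kept with L1   */

/* Split a physical address into its tag and index fields. */
static void split_addr(uint32_t addr, uint32_t *tag, uint32_t *index)
{
    *index = (addr >> OFFSET_BITS) & (L2_SETS - 1);
    *tag   =  addr >> (OFFSET_BITS + INDEX_BITS);
}

/*
 * Normal L2 read after an L1 miss: all ways are searched as in a
 * conventional set-associative cache; on a hit, the matching way is
 * recorded in the way-tag array for later write-through accesses.
 */
static bool l2_read(uint32_t addr)
{
    uint32_t tag, index;
    split_addr(addr, &tag, &index);

    for (uint8_t way = 0; way < L2_WAYS; way++) {
        if (l2[index][way].valid && l2[index][way].tag == tag) {
            l1_way_tag[index] = way;     /* remember which way hit       */
            return true;
        }
    }
    return false;                        /* miss: fetch from main memory */
}

/*
 * Write-through from L1: under write-through the data already resides
 * in L2, so only the way recorded in the way-tag array is activated
 * instead of all L2_WAYS ways, which is where the dynamic energy
 * saving comes from.
 */
static void l2_write_through(uint32_t addr, uint64_t data)
{
    uint32_t tag, index;
    split_addr(addr, &tag, &index);

    uint8_t way = l1_way_tag[index];     /* direct-mapped-style access   */
    l2[index][way].tag   = tag;
    l2[index][way].valid = true;
    (void)data;                          /* data-array write omitted     */
}

int main(void)
{
    uint32_t addr = 0x0001F240u;

    if (!l2_read(addr)) {                /* cold miss: simulate a fill   */
        uint32_t tag, index;
        split_addr(addr, &tag, &index);
        l2[index][0].valid = true;
        l2[index][0].tag   = tag;
        l2_read(addr);                   /* hit; way tag gets recorded   */
    }
    l2_write_through(addr, 0xDEADBEEFu); /* activates a single L2 way    */
    return 0;
}
```

The point the sketch illustrates is that l2_write_through() touches only the single L2 way recorded earlier by l2_read(), whereas a conventional set-associative write would activate all L2_WAYS tag and data ways.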
