Improving disk bandwidth-bound applications through main memory compression

Main memory compression techniques reduce the in-memory size of data to virtually enlarge the memory available on a system. Their main benefit is a reduction in slow disk I/O operations, which improves data access latency and saves disk bandwidth. Their main drawback is the large amount of CPU power consumed by computationally expensive compression algorithms, which has made them unsuitable for moderately to highly CPU-intensive applications. With the proliferation of multicore processors and multiprocessor systems, however, the available CPU power is growing rapidly, expanding the range of applications that can transparently benefit from main memory compression: not only single-threaded applications bounded by disk latency, but also multithreaded ones bounded by disk bandwidth. In this paper we implement and evaluate, in the Linux OS, a fully SMP-capable main memory compression subsystem that exploits current multicore and multiprocessor systems to increase the performance of bandwidth-sensitive applications such as the SPECweb2005 benchmark, with promising results.
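The core idea can be illustrated with a minimal sketch: instead of writing an evicted page to swap, the system compresses it and keeps it in RAM, so a later fault on that page is served by decompression rather than a disk read. The class and method names below are hypothetical, chosen for illustration only, and the sketch omits the eviction policies and memory accounting a real kernel subsystem would need.

```python
import zlib

class CompressedCache:
    """Hypothetical sketch of a compressed in-memory page cache.

    Evicted pages are compressed and retained in RAM instead of being
    written to swap; a later page fault decompresses the page, trading
    CPU cycles for avoided disk I/O.
    """

    def __init__(self):
        self._store = {}       # page_id -> compressed bytes
        self.raw_bytes = 0     # uncompressed size of cached pages
        self.stored_bytes = 0  # RAM actually used by compressed pages

    def evict(self, page_id, data):
        # Compress the evicted page instead of writing it to disk.
        blob = zlib.compress(data, level=6)
        self._store[page_id] = blob
        self.raw_bytes += len(data)
        self.stored_bytes += len(blob)

    def fault(self, page_id):
        # A page fault checks the compressed cache before going to disk.
        blob = self._store.pop(page_id, None)
        if blob is None:
            raise KeyError(page_id)  # would trigger a real disk read
        return zlib.decompress(blob)

cache = CompressedCache()
page = b"GET /index.html HTTP/1.1 " * 160  # compressible web-server data
cache.evict(7, page)
assert cache.fault(7) == page               # served from RAM, no disk I/O
assert cache.stored_bytes < cache.raw_bytes  # net memory is enlarged
```

On a multicore system, independent evictions and faults can run the compress/decompress steps on separate cores, which is what makes the technique viable for the bandwidth-bound multithreaded workloads the paper targets.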
