The case for application-specific benchmarking

Most performance analysis today uses either microbenchmarks or standard macrobenchmarks (e.g., SPEC, LADDIS, the Andrew benchmark). However, the results of such benchmarks provide little indication of how well a particular system will handle a particular application. Such results are, at best, useless and, at worst, misleading. In this paper we argue for an application-directed approach to benchmarking, using performance metrics that reflect the expected behavior of a particular application across a range of hardware and software platforms. We present three approaches to application-specific measurement: one using vectors that characterize both the underlying system and the application, one using trace-driven techniques, and a hybrid of the two. We argue that such techniques should become the new standard.
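To make the vector-based approach concrete, the sketch below shows one natural realization, assuming a hypothetical set of primitive operations: a system vector records the measured cost of each primitive on a given platform, an application vector records how often the application exercises each primitive, and predicted runtime is their dot product. The operation names, costs, and counts here are illustrative assumptions, not measurements from the paper.

```python
# Minimal sketch of vector-based performance prediction.
# All primitives, costs, and counts below are hypothetical.

# System vector: measured cost (seconds) of each primitive operation
# on one platform.
system_vector = {
    "syscall": 2.1e-6,
    "page_fault": 3.5e-5,
    "disk_read_8k": 8.0e-3,
}

# Application vector: how many times the application performs each
# primitive during a representative run.
app_vector = {
    "syscall": 1_200_000,
    "page_fault": 40_000,
    "disk_read_8k": 15_000,
}

def predict_runtime(system: dict, app: dict) -> float:
    """Predicted runtime is the dot product of the two vectors."""
    return sum(count * system[op] for op, count in app.items())

print(f"predicted runtime: {predict_runtime(system_vector, app_vector):.2f} s")
```

Re-measuring only the system vector on a second platform, while reusing the same application vector, is what lets this style of benchmark predict application behavior across platforms without rerunning the application itself.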
