Filesystem design, implementation, and performance is one of the most widely researched areas in operating systems. Almost all of this research reports performance numbers gathered from a variety of benchmarks, yet existing filesystem benchmarks are inadequate, suffering from problems ranging from failing to scale with advancing technology to not actually measuring the filesystem. A new approach to filesystem benchmarking is presented here, designed both to help system designers understand and improve existing systems and to help users decide which filesystem to buy or run. For usability, the benchmark is separated into two parts: a suite of micro-benchmarks, which is actually run on the filesystem, and a workload characterizer. The results from the two parts can be combined to predict the performance of the filesystem on the workload. The purpose of this separation is two-fold. First, many system designers want their filesystem to perform well under diverse workloads: by characterizing the workload independently, designers can better understand what is required of the filesystem. The micro-benchmarks tell the designer what needs to be improved, while the workload characterizer tells the designer whether that improvement will affect filesystem performance under a given workload. Second, the separation helps users deciding which system to run or buy, since they may not be able to run their workload on every system under consideration. The implementation of this methodology avoids many of the problems seen in existing benchmarks: it scales with technology, it is tightly specified, and it helps system designers. Its only drawback is that it does not accurately predict the performance of a filesystem on a workload, which limits its applicability: it is useful to system designers, but not yet for users deciding which system to buy. The belief is that the general approach will work, given additional time to refine the prediction algorithm.
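To make the combination step concrete, the sketch below shows one simple way micro-benchmark results and a workload characterization could be merged into a performance prediction: a linear model that weights each measured per-operation cost by that operation's frequency in the characterized workload. The operation names, numbers, and the linear model itself are illustrative assumptions, not the prediction algorithm described above.

```python
# Hypothetical sketch: combining micro-benchmark measurements with a
# workload characterization to predict filesystem performance.
# All names and values are assumptions for illustration only.

# Per-operation costs measured by the micro-benchmark suite (seconds per op).
micro_results = {
    "read_4k": 0.00012,
    "write_4k": 0.00035,
    "create": 0.0021,
    "delete": 0.0017,
}

# Operation mix produced by the workload characterizer (operation counts).
workload_profile = {
    "read_4k": 150_000,
    "write_4k": 40_000,
    "create": 2_500,
    "delete": 2_400,
}

def predict_runtime(micro, workload):
    """Predict total runtime by weighting each measured per-operation cost
    by its frequency in the characterized workload (simple linear model)."""
    return sum(count * micro[op] for op, count in workload.items() if op in micro)

if __name__ == "__main__":
    print(f"Predicted runtime: {predict_runtime(micro_results, workload_profile):.2f} s")
```

A real predictor would also need to account for effects a purely linear model misses, such as caching, request ordering, and concurrency, which is presumably where the additional refinement mentioned above comes in.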