Benchmarking Decision Models for Database Management Systems

Benchmarking is the quantitative method most commonly used when managers contemplate procuring a large business information system. It consists of running a group of representative applications on the systems offered by vendors in order to validate the vendors' claims. Benchmarking can be very costly to carry out, because users must convert, run, and test applications on several only partially compatible computer systems. It works well for applications oriented toward modern database management systems (DBMSs), since system performance there is more a function of the database structure and activity than of the complexity of the application code. Earlier research focused primarily on designing individual benchmarks for database systems; the decision problem of finding an optimal mix of benchmarks has largely been overlooked. In this paper, we examine the problem of defining the most economical process for generating and evaluating the appropriate mix of benchmarks to be run across the contending information systems. Our analytical approach considers information-gathering priorities, acquisition and execution costs, resource consumption, and overall time requirements. We present a multiobjective decision-making approach for deriving the optimal mix of benchmarks, one that reflects the major organizational objectives rather than reducing them to a single one-dimensional numerical measure. A practical example illustrates the utility of the approach in evaluating a client-server relational database system.
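To make the selection problem concrete, the sketch below casts benchmark-mix selection as a small zero-one weighted goal program, one common multiobjective formulation for this kind of decision. It is not the paper's actual model: the candidate benchmarks, costs, runtimes, information-value scores, goal targets, and priority weights are all illustrative assumptions, and the solver (Python's PuLP with the bundled CBC backend) is simply a convenient open-source choice.

```python
# Minimal sketch (assumed data, not from the paper): choose a mix of benchmarks
# that trades off budget, elapsed time, and information value via weighted
# deviations from managerial goal targets.
import pulp

# Hypothetical candidate benchmarks: per-run cost ($K), runtime (hours),
# and an information-value score for the procurement decision.
benchmarks = {
    "wisconsin":    {"cost": 12, "hours": 40, "value": 7},
    "as3ap":        {"cost": 18, "hours": 60, "value": 9},
    "debit_credit": {"cost": 9,  "hours": 25, "value": 5},
    "custom_oltp":  {"cost": 30, "hours": 90, "value": 10},
}
budget_target, time_target, value_target = 40, 120, 20  # assumed goals

prob = pulp.LpProblem("benchmark_mix", pulp.LpMinimize)
x = {b: pulp.LpVariable(f"x_{b}", cat="Binary") for b in benchmarks}

# Deviation variables: overspend, overtime, and shortfall in information value.
over_cost = pulp.LpVariable("over_cost", lowBound=0)
over_time = pulp.LpVariable("over_time", lowBound=0)
under_value = pulp.LpVariable("under_value", lowBound=0)

# Soft goal constraints: each deviation variable absorbs any miss of its target.
prob += pulp.lpSum(benchmarks[b]["cost"] * x[b] for b in benchmarks) - over_cost <= budget_target
prob += pulp.lpSum(benchmarks[b]["hours"] * x[b] for b in benchmarks) - over_time <= time_target
prob += pulp.lpSum(benchmarks[b]["value"] * x[b] for b in benchmarks) + under_value >= value_target

# Objective: weighted sum of undesirable deviations; weights are illustrative
# stand-ins for the information-gathering priorities discussed above.
prob += 3 * under_value + 2 * over_cost + 1 * over_time

prob.solve(pulp.PULP_CBC_CMD(msg=False))
chosen = [b for b in benchmarks if x[b].value() == 1]
print("selected benchmark mix:", chosen)
```

In practice the deviation weights would be elicited from management (or replaced by preemptive priority levels), and the coefficients would come from measured acquisition, conversion, and execution estimates for each candidate benchmark on each contending system.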
