The consequences of fixed time performance measurement

In measuring the performance of parallel computers, the usual method is to choose a problem and measure the execution time as the processor count is varied. This fixed-size model underlies the definitions of 'speedup' and 'efficiency,' and it pervades arguments against parallel processing such as Ware's (1972) formulation of Amdahl's law (1967). Fixed-time models instead hold the execution time constant and use problem size as the figure of merit. Analysis and experiments based on fixed time rather than fixed size have surprising consequences: the fixed-time method does not reward slower processors with higher speedup; it predicts a new limit to speedup that is more optimistic than Amdahl's; it yields an efficiency that is independent of processor speed and ensemble size; it sometimes produces non-spurious superlinear speedup; and it provides a practical means (the SLALOM benchmark) of comparing computers of widely varying speeds without distortion.
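As a minimal sketch of the contrast between the two models (in the notation commonly used for the laws cited above, e.g. in [20] and [26]): let s be the serial fraction of the work, N the number of processors, and s' the serial fraction as measured on the N-processor run.

\[
S_{\mathrm{fixed\ size}}(N) \;=\; \frac{1}{s + (1-s)/N} \;\le\; \frac{1}{s},
\qquad
S_{\mathrm{fixed\ time}}(N) \;=\; s' + (1-s')\,N .
\]

The fixed-size expression is bounded by 1/s no matter how many processors are applied, while the fixed-time expression grows linearly with N; this is the sense in which the fixed-time limit is 'more optimistic than Amdahl's.'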

[1] D. Parkinson, Parallel efficiency can be greater than unity, 1986, Parallel Comput.

[2] Jack J. Dongarra, Performance of various computers using standard linear equations software in a Fortran environment, 1983, CARN.

[3] Donald P. Greenberg, et al., Modeling the interaction of light between diffuse surfaces, 1984, SIGGRAPH.

[4] Willis H. Ware, et al., The ultimate computer, 1972, IEEE Spectrum.

[5] David P. Helmbold, et al., Modeling Speedup (n) Greater than n, 1990, IEEE Trans. Parallel Distributed Syst.

[6] John W. Sheldon, et al., The IBM card-programmed electronic calculator, 1951, AIEE-IRE '51.

[7] David H. Bailey, et al., The NAS Parallel Benchmarks, 1991, Int. J. High Perform. Comput. Appl.

[8] Lynn Pointer, Perfect: performance evaluation for cost effective transformations report 2, 1990.

[9] John L. Gustafson, et al., When "Grain Size" Doesn't Matter, 1991, The Sixth Distributed Memory Computing Conference, 1991, Proceedings.

[10] F. H. McMahon, et al., The Livermore Fortran Kernels: A Computer Test of the Numerical Performance Range, 1986.

[11] Vance Faber, et al., Superlinear speedup of an efficient sequential algorithm is not possible, 1986, Parallel Comput.

[12] John L. Gustafson, et al., Fixed Time, Tiered Memory, and Superlinear Speedup, 1990, Proceedings of the Fifth Distributed Memory Computing Conference, 1990.

[13] Charles L. Seitz, et al., The cosmic cube, 1985, CACM.

[14] Xian-He Sun, Parallel computation models: representation, analysis and applications, 1991.

[15] Xian-He Sun, et al., Toward a better parallel performance metric, 1991, Parallel Comput.

[16] John V. Atanasoff, et al., Computing Machine for the Solution of Large Systems of Linear Algebraic Equations, 1982.

[17] Brian A. Wichmann, et al., A Synthetic Benchmark, 1976, Comput. J.

[18] Charles E. McDowell, et al., Modeling Speedup greater than n, 1989, International Conference on Parallel Processing.

[19] Patrick H. Worley, et al., The Effect of Time Constraints on Scaled Speedup, 1990, SIAM J. Sci. Comput.

[20] John L. Gustafson, et al., Reevaluating Amdahl's law, 1988, CACM.

[21] Reinhold Weicker, et al., Dhrystone: a synthetic systems programming benchmark, 1984, CACM.

[22] Lionel M. Ni, et al., Another view on parallel speedup, 1990, Proceedings SUPERCOMPUTING '90.

[23] G. R. Withers, et al., Computing performance as a function of the speed, quantity, and cost of the processors, 1989, Proceedings of the 1989 ACM/IEEE Conference on Supercomputing (Supercomputing '89).

[24] Edward D. Lazowska, et al., Speedup Versus Efficiency in Parallel Systems, 1989, IEEE Trans. Computers.

[25] Alan H. Karp, et al., Measuring parallel processor performance, 1990, CACM.

[26] G. Amdahl, et al., Validity of the single processor approach to achieving large scale computing capabilities, 1967, AFIPS '67 (Spring).

[27] Robert E. Benner, et al., Development of Parallel Methods for a 1024-Processor Hypercube, 1988.

[28] F. Alt, A Bell Telephone Laboratories' Computing Machine—I, 1948.

[29] Frederic A. Van-Catledge, Toward a General Model for Evaluating the Relative Performance of Computer Systems, 1989, Int. J. High Perform. Comput. Appl.