Fixed Time, Tiered Memory, and Superlinear Speedup

In the problem size-ensemble size plane, the fixed-size and scaled-size paradigms have been the subsets of primary interest to the parallel processing community. A difficulty with the newer scaled-size model is that execution time increases for problems whose operation count grows faster than their storage requirement. The fixed-time model is introduced, which, unlike the scaled model, implies the need to reduce the problem size per processor. This reduction causes uniprocessor speed to vary. Historical ensemble models hold uniprocessor performance constant as problem size varies, even beyond physical memory size. With tiered memory, however, performance can increase rather than decrease as the problem size per processor shrinks, and the workload can shift to routines with higher speed as the problem is scaled. Superlinear speedup results in such cases. Far from being an anomaly, superlinear speedup becomes commonplace when the performance model makes realistic assumptions about memory speed and problem scaling.
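The mechanism described above can be sketched numerically. The following is a minimal illustration, not the paper's model: the tier capacity and the two memory-tier rates are invented round numbers, and the sketch assumes perfectly parallel work with no serial fraction or communication cost. Its only purpose is to show how dividing a problem across processors can move each processor's working set into a faster memory tier, making measured speedup exceed the processor count.

```python
# Hypothetical two-tier memory model: a processor runs at the "fast" rate
# when its working set fits in the fast tier (e.g., cache), and at the
# "slow" rate otherwise. All constants are illustrative assumptions.

def per_processor_rate(n_per_proc, tier_capacity=1_000, fast=2.0, slow=1.0):
    """Operations per unit time for one processor holding n_per_proc data items."""
    return fast if n_per_proc <= tier_capacity else slow

def speedup(n, p, **kw):
    """Speedup of p processors over one processor on the same problem of size n,
    assuming the work divides evenly and in parallel with no overhead."""
    t1 = n / per_processor_rate(n, **kw)            # uniprocessor time
    tp = (n / p) / per_processor_rate(n / p, **kw)  # time on p processors
    return t1 / tp

# With n = 8000 and p = 16, each processor's share (500 items) fits in the
# fast tier, while the uniprocessor's working set (8000 items) does not:
print(speedup(8_000, 16))  # 16 * (2.0 / 1.0) = 32.0, i.e., superlinear
```

When both the whole problem and the per-processor share land in the same tier, the model reduces to ordinary linear speedup; the superlinear effect appears only when partitioning crosses the tier boundary, which is the paper's point about realistic memory assumptions.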
