Defining and measuring scalability
The concept of scalability in parallel systems is simple: given reasonable performance on a sample problem, a problem of increased workload can be solved with reasonable performance, given a commensurate increase in computational resources. This definition lacks the analytical precision required of any scientific classification system, since its terms are almost entirely subjective. Some attempts have been made to measure scalability, but many of the popular measurements fail to eliminate these subjective terms. For example, the fixed-time measurements that have been advanced do not specify the fixed-time constraint, and the scaled-speedup measurements do not specify the initial workload. Because these measurements rest on a subjective notion of "reasonable performance," they are unreliable. An alternative definition emerges when scalability is defined as the ability to maintain cost effectiveness as workload grows. Under this approach, the subjective "reasonable performance" is replaced by an objective criterion: optimal cost effectiveness. The success of this approach depends heavily on finding a cost-effectiveness function that is relevant to scalability. This paper introduces such a cost-effectiveness function and argues that it is highly relevant to the goals of developing scalable systems.
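To make the contrast concrete, the scaled-speedup metric criticized above has a standard closed form (Gustafson, 1988), while the cost-effectiveness view can be sketched as a performance-per-cost ratio. The function CE(p, W) below is an illustrative assumption for exposition only; the abstract does not state the paper's actual proposed function, and the per-processor cost c and runtime T(p, W) are assumed symbols.

% A minimal compilable sketch of the two metrics, assuming an
% illustrative cost-effectiveness function (not the paper's own).
\documentclass{article}
\begin{document}
Scaled speedup on $p$ processors, where $s$ is the serial fraction
of execution time measured on the scaled workload:
\[ S(p) = s + p\,(1 - s) \]
An \emph{assumed} illustrative cost-effectiveness function, where
$T(p, W)$ is the runtime of workload $W$ on $p$ processors and $c$
is the cost of one processor, giving work completed per unit time
per unit of machine cost:
\[ \mathit{CE}(p, W) = \frac{W / T(p, W)}{p\,c} \]
Under this view, a system is scalable if $\mathit{CE}(p, W)$ can be
held near its optimum as $W$ (and correspondingly $p$) grows.
\end{document}

Note that the scaled-speedup formula leaves the initial workload unconstrained, which is exactly the subjectivity the abstract objects to, whereas the ratio form replaces "reasonable performance" with a quantity that can be maximized.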