Scalability in Parallel Processing

The objective of this chapter is to discuss the notion of scalability. We start by explaining the notion with an emphasis on modern (and future) large-scale parallel platforms. We then review the classical metrics used for estimating the scalability of a parallel platform, namely speedup, efficiency, and asymptotic analysis, and present two fundamental laws of scalability: Amdahl’s law and Gustafson’s law. Our presentation follows the original arguments of the authors and reexamines their applicability to today’s machines and computational problems. The chapter then turns to more advanced topics: the evolution of computing fields (in terms of the problems they address), modern resource-sharing techniques, and the specific issue of reducing energy consumption. It ends with a statistical approach to the design of scalable algorithms, in which scalable algorithms are obtained through a “cooperation” of several parallel algorithms solving the same problem. Constructing such cooperations is particularly attractive when solving hard combinatorial problems; we illustrate this last point on the classical satisfiability problem (SAT). The classical definitions, and a sketch of such a cooperation, are recalled below.
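
For reference, here is a standard formulation of the metrics and laws named above (textbook material, not quoted from the chapter), with T(p) denoting the execution time on p processors:

```latex
% T(p): execution time on p processors.
% Speedup and efficiency:
\[ S(p) = \frac{T(1)}{T(p)}, \qquad E(p) = \frac{S(p)}{p} \]
% Amdahl's law (fixed problem size, serial fraction f of the work):
\[ S(p) = \frac{1}{f + (1-f)/p} \;\le\; \frac{1}{f} \]
% Gustafson's law (scaled problem size, serial fraction s of the
% parallel execution time):
\[ S_{\mathrm{scaled}}(p) = s + p\,(1-s) \]
```

Amdahl’s bound says that a fixed workload with serial fraction f can never be sped up beyond 1/f, however many processors are used; Gustafson’s scaled speedup instead grows linearly in p when the problem size grows with the machine.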

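As an illustration of the “cooperation” idea, the sketch below runs several SAT solvers concurrently on the same formula and keeps the first answer, the simplest form of an algorithm portfolio. The solver interface, the thread-pool execution, and the toy solvers are assumptions made for this example, not the chapter’s construction.

```python
# Minimal sketch of a solver "cooperation" (algorithm portfolio):
# several algorithms attack the same SAT instance in parallel and
# the first one to finish wins. The solver interface is assumed to
# be: solver(formula) -> result.
from concurrent.futures import ThreadPoolExecutor, FIRST_COMPLETED, wait

def run_portfolio(solvers, formula):
    """Run every solver on the same formula; return the first result."""
    pool = ThreadPoolExecutor(max_workers=len(solvers))
    try:
        futures = [pool.submit(solver, formula) for solver in solvers]
        done, _ = wait(futures, return_when=FIRST_COMPLETED)
        return next(iter(done)).result()
    finally:
        # Python 3.9+: cancel queued work and return without waiting
        # for the losing solvers (already-running threads cannot be
        # interrupted, so this is best effort).
        pool.shutdown(wait=False, cancel_futures=True)

# Toy usage: two dummy "solvers" with different running times.
if __name__ == "__main__":
    import time

    def fast_solver(formula):
        time.sleep(0.1)
        return ("fast", "SAT")

    def slow_solver(formula):
        time.sleep(2.0)
        return ("slow", "SAT")

    # CNF given as a list of clauses, each clause a list of literals.
    print(run_portfolio([fast_solver, slow_solver], [[1, -2], [2]]))
```

The appeal of such a cooperation on hard combinatorial problems is that the running times of individual solvers vary enormously across instances, so the portfolio inherits the best behavior of its members at the cost of the shared resources.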