In scientific programming, the never-ending push to increase fidelity, flops, and physics is hitting a major barrier: scalability. In the context of this paper, we do not mean the run-time scalability of code across processors, but the implementation scalability of the number of people working on a single code. With the kinds of multi-disciplinary, multi-physics, multi-resolution applications that are here and on the horizon, it is clear that no single code group, and no single organization, has all the required expertise or available time to independently create all of the software needed to solve today's cutting-edge computational problems. Scientific programming libraries have alleviated some of this pressure in the past, but their scaling problems are becoming increasingly apparent. The benefit of software libraries has been that different code groups in different organizations can bring their expertise to bear on particular sub-problems. Unfortunately, different groups and organizations also bring with them implicit dependencies on different software development platforms, different programming languages, and different conceptual models of the problem decomposition, all of which must be resolved if the libraries they produce are to be useful in a final application. The good news is that scientific computing is not alone in facing these software scalability problems, and several industry solutions have proven successful. The bad news