Strong scaling analysis of a parallel, unstructured, implicit solver and the influence of the operating system interference

PHASTA falls under the category of high-performance scientific computation codes designed for solving partial differential equations (PDEs). It is a massively parallel, unstructured, implicit solver with particular emphasis on computational fluid dynamics (CFD) applications. More specifically, PHASTA is a parallel, hierarchic, adaptive, stabilized, transient analysis code that effectively employs advanced anisotropic adaptive algorithms and numerical models of flow physics. In this paper, we first describe the parallelization of PHASTA's core algorithms for an implicit solve; a key assumption is that, on a properly balanced supercomputer with appropriate attributes, PHASTA should continue to scale strongly at high core counts until the computational workload per core becomes insufficient and inter-processor communication starts to dominate. We then present and analyze PHASTA's parallel performance on a variety of current near-petascale systems, including the IBM BG/L, IBM BG/P, Cray XT3, and a custom Opteron-based supercluster; this selection of systems with inherently different attributes covers a majority of potential candidates for upcoming petascale systems. On one hand, we achieve near-perfect (linear) strong scaling out to 32,768 cores of the IBM BG/L, showing that a system with desirable attributes will allow implicit solvers to scale strongly at high core counts (including on petascale systems). On the other hand, we find that the tipping point for strong scaling differs fundamentally among current supercomputer systems. To understand the loss of scaling observed on a particular system (the Opteron-based supercluster), we analyze its performance and demonstrate that such a loss can be attributed to an imbalance in a system attribute, specifically the compute-node operating system (OS). In particular, PHASTA scales well to high core counts (up to 32,768 cores) during an implicit solve on systems whose compute nodes use lightweight kernels (for example, the IBM BG/L); in contrast, we show that on a system where the compute-node OS is more heavyweight (e.g., one with background processes), the loss in strong scaling appears at a much lower core count (4,096 cores).
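As a concrete illustration of the strong-scaling metric underlying this kind of analysis, the minimal sketch below computes speedup and parallel efficiency from per-run solve times at increasing core counts, with the total problem size held fixed. The timing values are hypothetical placeholders, not measurements from the paper; only the formulas (speedup relative to a baseline core count, and efficiency as speedup over ideal speedup) reflect the standard definitions.

```python
# Minimal strong-scaling sketch: fixed total problem size, increasing core
# counts. All timings below are hypothetical, not results from the paper.

base_cores = 512                  # smallest run, used as the scaling baseline
runs = {                          # cores -> wall-clock time per implicit solve (s)
    512:  100.0,
    1024:  50.4,
    2048:  25.6,
    4096:  13.9,                  # hypothetical point where efficiency drops
}

base_time = runs[base_cores]
for cores, time in sorted(runs.items()):
    # Speedup relative to the baseline run; ideal speedup is cores / base_cores.
    speedup = base_time / time
    ideal = cores / base_cores
    efficiency = speedup / ideal  # 1.0 corresponds to perfect (linear) scaling
    print(f"{cores:6d} cores: speedup {speedup:6.2f} "
          f"(ideal {ideal:5.1f}), efficiency {efficiency:6.1%}")
```

Strong scaling is commonly measured against the smallest feasible run rather than a single core, since a full unstructured mesh rarely fits in one core's memory; efficiency falling below 1.0 signals that per-core work has become too small and communication overheads, or interference such as OS background activity, have begun to dominate.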
