CPMIP: measurements of real computational performance of Earth system models in CMIP6

Abstract. A climate model represents a multitude of processes on a variety of timescales and space scales: a canonical example of multi-physics, multi-scale modeling. The underlying climate system is physically characterized by sensitive dependence on initial conditions and natural stochastic variability, so very long integrations are needed to extract signals of climate change. Algorithms generally possess weak scaling and can be I/O- and/or memory-bound. Such weak-scaling, I/O-bound, and memory-bound multi-physics codes present particular challenges to computational performance. Traditional metrics of computational efficiency, such as performance counters and scaling curves, do not tell us enough about the real sustained performance of climate models on different machines, nor do they provide a satisfactory basis for comparison across models. We introduce a set of metrics that can be used for the study of the computational performance of climate (and Earth system) models. These measures do not require specialized software or specific hardware counters, and should be accessible to anyone. They are independent of platform and underlying parallel programming models. We show how these metrics can be used to measure the performance actually attained by Earth system models on different machines, and to identify the most fruitful areas of research and development for performance engineering. We present results for these measures for a diverse suite of models from several modeling centers, and propose to use these measures as the basis for CPMIP, a computational performance model intercomparison project (MIP).
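The abstract emphasizes metrics that need no specialized software or hardware counters. As a minimal sketch of what such platform-independent measures look like, the snippet below computes two throughput/cost quantities of the kind commonly used in climate computing (simulated years per day, SYPD, and core-hours per simulated year, CHSY); the exact metric set and names here are an illustrative assumption, not quoted from the abstract itself.

```python
def sypd(simulated_years: float, wallclock_hours: float) -> float:
    """Simulated years per wallclock day: raw model throughput.

    Needs only two numbers any modeling center can record: how much
    simulated time was covered, and how long the run took on the wall.
    """
    return simulated_years / (wallclock_hours / 24.0)


def chsy(core_count: int, simulated_years: float, wallclock_hours: float) -> float:
    """Core-hours per simulated year: the computational cost of the run."""
    return core_count * wallclock_hours / simulated_years


# Hypothetical run: 10 simulated years in 48 wallclock hours on 1000 cores.
print(sypd(10, 48))        # 5.0 simulated years per day
print(chsy(1000, 10, 48))  # 4800.0 core-hours per simulated year
```

Because both quantities derive from bookkeeping data rather than hardware counters, they can be compared across machines, compilers, and parallel programming models, which is the property the abstract highlights.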
