A Co-Design Study of Fusion Whole Device Modeling Using Code Coupling

Complex workflows consisting of multiple simulation and analysis codes running concurrently and coupled in memory are becoming popular due to their inherent advantages for online management of large-scale data, resilience, and the code development process. However, orchestrating such a multi-application workflow to efficiently utilize the resources of a heterogeneous architecture is challenging. In this paper, we present our results from running the Fusion Whole Device Modeling benchmark workflow on Summit, a pre-exascale supercomputer at Oak Ridge National Laboratory. We explore various resource distribution and process placement mechanisms, including sharing compute nodes between processes from separate applications. We show that fine-grained process placement can have a significant impact on how efficiently the compute power of a Summit node is utilized, and we conclude that sophisticated tools for performing co-design studies of multi-application workflows can play an important role in the efficient orchestration of such workflows.
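
As a rough illustration of the node-sharing placement mentioned above, the sketch below builds two concurrent jsrun launches that split each Summit node's cores and GPUs between two coupled applications. This is a minimal sketch, not the configuration used in the study: the executable names (core_solver, edge_solver), the node counts, and the 28/14-core, 4/2-GPU split are illustrative assumptions; only the standard jsrun resource-set flags (-n, -a, -c, -g, -r, -b) are taken as given.

```python
#!/usr/bin/env python3
"""Sketch: co-locating two coupled applications on shared Summit nodes via jsrun.

Illustrative only; executable names and resource-set sizes are assumptions.
"""
import subprocess


def jsrun_cmd(exe, n_rs, tasks_per_rs, cpus_per_rs, gpus_per_rs, rs_per_host,
              bind="packed:7"):
    """Build a jsrun command line that pins one application to a slice of each node."""
    return [
        "jsrun",
        "-n", str(n_rs),          # total number of resource sets
        "-a", str(tasks_per_rs),  # MPI ranks per resource set
        "-c", str(cpus_per_rs),   # physical cores per resource set
        "-g", str(gpus_per_rs),   # GPUs per resource set
        "-r", str(rs_per_host),   # resource sets placed on each node
        "-b", bind,               # bind each rank to a block of cores
        exe,
    ]


# Hypothetical split of a Summit node (42 usable cores, 6 GPUs) between two
# concurrently running, in-memory-coupled applications on the same 8 nodes.
core = jsrun_cmd("./core_solver", n_rs=8, tasks_per_rs=4,
                 cpus_per_rs=28, gpus_per_rs=4, rs_per_host=1)
edge = jsrun_cmd("./edge_solver", n_rs=8, tasks_per_rs=2,
                 cpus_per_rs=14, gpus_per_rs=2, rs_per_host=1)

# Launch both applications concurrently so they run side by side in one allocation.
# Note: precise co-location of resource sets from separate jsrun invocations on
# the same node may require explicit resource files (jsrun --erf_input).
procs = [subprocess.Popen(cmd) for cmd in (core, edge)]
for p in procs:
    p.wait()
```

The design choice illustrated here is the one discussed in the abstract: rather than giving each application whole nodes, both applications receive a fraction of every node's CPUs and GPUs, so the in-memory coupling stays node-local and idle hardware within a node is reduced.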
