Energy Efficient Runtime Framework for Exascale Systems

Building an Exascale computer that solves scientific problems three orders of magnitude faster than current Petascale systems is harder than simply scaling up existing machines. On the path to the first Exascale computer, energy consumption has emerged as a crucial factor. Every component will have to change to create an Exascale system capable of a million trillion (10^18) calculations per second. To run efficiently on such huge systems and to exploit all of their computational power, software and the underlying algorithms must be rewritten. While many compute-intensive applications are designed around the Message Passing Interface (MPI) with its two-sided communication semantics, the Partitioned Global Address Space (PGAS) model provides an abstraction of a global address space that allows a distributed system to be programmed as if its memory were shared. Data locality and communication can be optimized through the one-sided communication offered by PGAS. In this paper we present an energy-aware runtime framework that is PGAS-based and uses MPI as the underlying communication layer.
