Programming at Exascale: Challenges and Innovations

Supercomputers become faster as hardware and software technologies continue to evolve. Current supercomputers are capable of 10^15 floating-point operations per second (FLOPS) and are called petascale systems. The high-performance computing (HPC) community is looking forward to systems capable of 10^18 FLOPS, called exascale. A system a thousand times faster than its predecessor poses challenges for the HPC community, and these challenges demand innovation in both software and hardware. In this paper, the challenges of programming exascale systems are reviewed, and developments in the main programming models and systems are surveyed.
