Thoughts on massively-parallel heterogeneous computing for solving large problems

In this paper, we present our view of massively-parallel heterogeneous computing for solving large scientific problems. We start by observing that computing has been the primary driver of major innovations since the beginning of the 21st century, and we argue that this is the fruit of decades of progress in computing methods, technology, and systems. A high-level analysis of scaling out and scaling up on large supercomputers is given through a time-domain wave-scattering simulation example. The importance of heterogeneous node architectures for effective scaling up is highlighted, and a case is made for low-complexity algorithms to enable continued scaling out toward exascale systems.
