Impacts of Current Hardware and Software Developments on Simulation Sciences
For more than a decade, single-core compute performance has no longer doubled every 18–24 months. Physical limitations due to continued chip miniaturisation have become apparent, and it is increasingly difficult to dissipate the heat generated by tiny, highly clocked, densely packed compute cores. To circumvent these limitations, massive parallelism has been introduced: more and more processors, each equipped with a growing number of moderately clocked compute cores, are assembled into large systems to reach the highest performance. Moreover, accelerators such as general-purpose graphics processing units (GP-GPUs), field-programmable gate arrays (FPGAs) and Intel's Many Integrated Core (MIC) architecture are increasingly used to further boost application performance. These developments have had a serious impact on applications, an effect with a history of several decades. Application codes can only benefit from the new architectures if they are optimised for massive parallelism and the incorporation of accelerators. Application scientists and experts in high-performance computing (HPC) therefore have to join forces to master this challenge. Additionally, a much closer collaboration between hardware architects and computational scientists has to be established to make them mutually aware of hardware and software issues and to enable robust solutions to be developed jointly.

In this presentation, current hardware and software developments will be introduced. Drawing on several activities of the Jülich Supercomputing Centre, it will be demonstrated how a computing centre can support and guide its users towards new technology, how new technology can be influenced by user demands, how hardware developments trigger new software developments and, finally, how the successful exploitation of these technical advances leads to new scientific insights.
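To illustrate the kind of restructuring the abstract alludes to, the following is a minimal sketch (not taken from the talk itself) of recasting a serial loop as a data-parallel map over worker processes, in Python for brevity; the function names and the toy `kernel` are illustrative assumptions, standing in for the far heavier compute kernels of real simulation codes, which would typically use MPI, OpenMP or accelerator offloading at much larger scale.

```python
# Illustrative sketch: exploiting many moderately clocked cores by
# decomposing a serial loop into independent chunks of work.
from concurrent.futures import ProcessPoolExecutor

def kernel(x: float) -> float:
    """Stand-in compute kernel; real simulation codes do far more work here."""
    return x * x

def run_serial(data):
    # Baseline: one core iterates over the whole domain.
    return [kernel(x) for x in data]

def run_parallel(data, workers=4):
    # Each worker processes a chunk of the domain independently -- the same
    # domain-decomposition idea that MPI/OpenMP codes apply at scale.
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(kernel, data, chunksize=max(1, len(data) // workers)))

if __name__ == "__main__":
    data = [float(i) for i in range(1000)]
    # Both variants must produce identical results; only the mapping
    # of work onto cores changes.
    assert run_serial(data) == run_parallel(data)
```

The essential point is that correctness must be preserved while the work is distributed; in production codes this decomposition, and the accompanying communication, is where most of the optimisation effort mentioned above is spent.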