Analysis of topology-dependent MPI performance on Gemini networks
Antonio J. Peña | Ralf G. Correa Carvalho | James Dinan | Pavan Balaji | Rajeev Thakur | William Gropp