Graphics processing units (GPUs) have attracted growing attention because of their high-performance processing capabilities for many scientific and engineering applications. However, programming such highly parallel devices requires adequate programming tools. Many such tools have emerged and hold the promise of high levels of performance. Some of these tools require specialized parallel programming skills, while others target the domain scientist. The costs and benefits of these tools are often unclear. In this work, we examine the use of several of these programming tools, namely the Compute Unified Device Architecture (CUDA), the Open Computing Language (OpenCL), the Portland Group Inc. (PGI) compiler, and MATLAB, in developing kernels from the NASA Advanced Supercomputing (NAS) Parallel Benchmarks suite. The resulting performance, as well as the required programmer effort, was quantified and used to characterize the productivity of GPUs under these different programming paradigms. Copyright © 2011 John Wiley & Sons, Ltd.
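For context, the kind of programmer effort the paper quantifies can be seen even in a trivial kernel. The sketch below (illustrative only, not drawn from the paper's NAS kernels) uses vector addition to show the explicit thread indexing, launch configuration, and memory management that CUDA requires; directive-based tools such as the PGI compiler and array languages such as MATLAB express the same loop with far less code, which is the productivity trade-off the study measures.

    // Minimal CUDA sketch: one thread computes one element of c = a + b.
    #include <cuda_runtime.h>
    #include <cstdio>

    __global__ void vecAdd(const float *a, const float *b, float *c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
        if (i < n) c[i] = a[i] + b[i];                   // guard the tail block
    }

    int main() {
        const int n = 1 << 20;
        size_t bytes = n * sizeof(float);
        float *a, *b, *c;
        cudaMallocManaged(&a, bytes);  // unified memory keeps the sketch short
        cudaMallocManaged(&b, bytes);
        cudaMallocManaged(&c, bytes);
        for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

        int threads = 256;                           // threads per block
        int blocks = (n + threads - 1) / threads;    // blocks to cover n
        vecAdd<<<blocks, threads>>>(a, b, c, n);
        cudaDeviceSynchronize();                     // wait for the kernel

        printf("c[0] = %f\n", c[0]);                 // expect 3.0
        cudaFree(a); cudaFree(b); cudaFree(c);
        return 0;
    }

In a directive-based paradigm, the same computation would typically remain an ordinary sequential loop annotated with an accelerator pragma, so the line-count and conceptual gap between these snippets is one concrete proxy for the effort differences the paper reports.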