Survey on High Productivity Computing Systems (HPCS) Languages [Internal Report]

Saliya Ekanayake
School of Informatics and Computing, Indiana University
sekanaya@cs.indiana.edu

Abstract

Parallel languages have traditionally focused on performance, but performance alone is not sufficient to overcome the barrier of developing software that exploits the power of evolving architectures. DARPA initiated the High Productivity Computing Systems (HPCS) languages project as a solution that addresses software productivity goals through language design. The three resulting languages are Chapel from Cray, X10 from IBM, and Fortress from Sun. We recognize the memory model (perhaps namespace model is a better term) as a classifier for parallel languages and present details on the shared, distributed, and partitioned global address space (PGAS) models. Next, we compare the HPCS languages in detail through the idioms they support for five common tasks in parallel programming: data parallelism, data distribution, asynchronous remote task creation, nested parallelism, and remote transactions. We conclude by presenting complete working code for k-means clustering in each language.