A high-level framework for parallelizing legacy applications for multiple platforms

The tremendous growth and diversification of computer architectures has contributed to an upsurge in the number of parallel programming paradigms, languages, and environments. However, it is often difficult for domain experts to develop expertise in multiple programming paradigms and languages in order to write performance-oriented parallel applications. Several active research projects aim to reduce the burden on programmers by raising the level of abstraction of parallel programming. However, a majority of such projects either entail manual, invasive reengineering of existing code to insert new directives for parallelization, or force conformance to specific interfaces. Some systems require that programmers rewrite their entire application in a new parallel programming language or a domain-specific language. Moreover, only a few research projects address the need for a single framework that can generate parallel applications for multiple hardware platforms or support hybrid programming. This paper presents a high-level framework for parallelizing existing serial applications for multiple target platforms. The framework, currently in its prototype stage, can semi-automatically generate parallel applications for systems with both distributed-memory and shared-memory architectures through MPI, OpenMP, and hybrid programming. For all the test cases considered so far, the performance of the generated parallel applications is comparable to that of manually written parallel versions of the same applications. Our approach enhances the productivity of end-users, as they are not required to learn any low-level parallel programming; it shortens the parallel application development cycle for multiple platforms; and it preserves the existing serial versions of the applications.