Design and Development of the Prophesy Performance Database for Distributed Scientific Applications

Efficient execution of a scientific computing application requires insight into how system features affect the application's performance. A distributed system consists of heterogeneous components such as networks, processors, run-time systems, and operating systems, and this heterogeneity complicates the task of understanding application performance. The Prophesy project [3] is an infrastructure that helps developers gain this insight from their own experience and that of others. The core component of the Prophesy system is a relational database that records performance data, system features, and application details, which can then be used to analyze and improve the performance of scientific applications. The Prophesy infrastructure can be used to develop models from significant bodies of performance data, identify the most efficient implementation of a given function on a given system configuration, and explore the trends implied by the collected data.
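
To make the database-centric design concrete, here is a minimal sketch of the kind of relational schema such a system might use, written in Python with the standard sqlite3 module. The table names (application, system, run), their columns, and the sample data are illustrative assumptions, not Prophesy's actual schema; the final query illustrates the kind of question the abstract describes, namely finding the most efficient configuration from recorded runs.

import sqlite3

# Hypothetical sketch of a performance database in the spirit of Prophesy:
# applications, systems, and per-run performance records. Table and column
# names are illustrative assumptions, not Prophesy's actual schema.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE application (
    app_id      INTEGER PRIMARY KEY,
    name        TEXT NOT NULL,
    version     TEXT
);
CREATE TABLE system (
    sys_id      INTEGER PRIMARY KEY,
    name        TEXT NOT NULL,
    processor   TEXT,
    network     TEXT,
    os          TEXT
);
CREATE TABLE run (
    run_id      INTEGER PRIMARY KEY,
    app_id      INTEGER REFERENCES application(app_id),
    sys_id      INTEGER REFERENCES system(sys_id),
    num_procs   INTEGER,
    wall_time_s REAL           -- measured execution time in seconds
);
""")

# Record one application, two systems, and a few timed runs (made-up data).
conn.execute("INSERT INTO application VALUES (1, 'nas-cg', '2.3')")
conn.execute("INSERT INTO system VALUES (1, 'clusterA', 'x86', 'ethernet', 'linux')")
conn.execute("INSERT INTO system VALUES (2, 'clusterB', 'x86', 'myrinet', 'linux')")
runs = [(1, 1, 1, 8, 120.5), (2, 1, 1, 16, 65.2), (3, 1, 2, 16, 48.9)]
conn.executemany("INSERT INTO run VALUES (?, ?, ?, ?, ?)", runs)

# Example query: for each system, the fastest recorded time at 16 processors,
# i.e. "which system configuration gives the most efficient execution?"
for row in conn.execute("""
    SELECT s.name, MIN(r.wall_time_s)
    FROM run r JOIN system s ON r.sys_id = s.sys_id
    WHERE r.num_procs = 16
    GROUP BY s.name
"""):
    print(row)

Joining run measurements against system descriptions in this way is what lets a performance database answer cross-system questions that raw trace files cannot.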

[1]  Brian H. LaRose. The development and implementation of a performance database server. Technical report, Computer Science Department, University of Tennessee, Knoxville, 1993.

[2]  Richard T. Snodgrass, et al. A relational approach to monitoring complex systems. ACM Transactions on Computer Systems (TOCS), 1988.

[3]  Xin Li, et al. Prophesy: an infrastructure for analyzing and modeling the performance of parallel and distributed applications. In Proceedings of the Ninth International Symposium on High-Performance Distributed Computing, 2000.

[4]  Valerie E. Taylor, et al. Performance coupling: a methodology for predicting application performance using kernel performance. In PPSC, 1999.

[5]  Randal L. Schwartz, et al. Learning Perl, 2nd edition. O'Reilly, 1997.

[6]  Graham R. Nudd, et al. PACE: a toolset to investigate and predict performance in parallel systems. 1996.

[7]  Thomas Boutell. CGI Programming in C & Perl. 1996.

[8]  David H. Bailey, et al. The NAS Parallel Benchmarks. International Journal of High Performance Computing Applications, 1991.

[9]  Barton P. Miller, et al. The Paradyn Parallel Performance Measurement Tools. IEEE Computer, 1995.

[10]  Ramesh Subramonian, et al. LogP: towards a realistic model of parallel computation. In PPoPP '93, 1993.

[11]  Michael W. Berry, et al. Public International Benchmarks for Parallel Computers: PARKBENCH Committee Report-1, 1994.

[12]  Dennis Gannon, et al. SIEVE: a performance debugging environment for parallel programs. Journal of Parallel and Distributed Computing, 1993.

[13]  Jack J. Dongarra, et al. A scalable cross-platform infrastructure for application performance tuning using hardware counters. In Proceedings of the ACM/IEEE SC 2000 Conference (SC'00), 2000.

[14]  Csaba Andras Moritz, et al. LoGPC: modeling network contention in message-passing programs. IEEE Transactions on Parallel and Distributed Systems, 2001.

[15]  Pankaj Mehra, et al. Performance measurement, visualization and modeling of parallel and distributed programs using the AIMS toolkit. Software: Practice and Experience, 1995.

[16]  D. A. Reed, et al. Scalable performance analysis: the Pablo performance analysis environment. In Proceedings of the Scalable Parallel Libraries Conference, 1993.