Profiling e-Science infrastructures with kernel and application benchmarks

Resource benchmarking is a promising though challenging opportunity for research communities to better exploit e-Science infrastructures. This paper investigates the suitability of an integrated two-level benchmarking approach for ranking resources on a performance basis. The coupling of two benchmarking techniques, i.e., kernel and application benchmarks, is intended to capture the behaviour of computational environments under real workloads, so as to support a more suitable and efficient assignment of resources to the applications submitted by users. To confirm the appropriateness of our approach, we carried out experiments on a test bed, which highlighted significant differences between the performance figures of the two levels of benchmarks. Several useful rules of thumb emerged from these experiments that can guide the fruitful adoption of this double-space approach in similar usage scenarios.
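The idea of a double-space ranking can be illustrated with a minimal sketch. Everything below is illustrative, not the paper's actual method: the resource names, scores, and the weighted combination of normalized kernel-level scores (e.g. synthetic FLOPS figures) with application-level scores (e.g. throughput of a real workload) are assumptions chosen only to show how the two benchmark spaces might be merged into a single ranking.

```python
# Hypothetical sketch of a two-level (kernel + application) resource ranking.
# All names, scores, and the weighting scheme are illustrative assumptions.

def normalize(scores):
    """Scale a dict of raw scores to [0, 1]; higher is better."""
    lo, hi = min(scores.values()), max(scores.values())
    span = (hi - lo) or 1.0  # avoid division by zero when all scores are equal
    return {k: (v - lo) / span for k, v in scores.items()}

def rank_resources(kernel_scores, app_scores, alpha=0.5):
    """Rank resources by a weighted sum of the two benchmark spaces:
    weight alpha on the kernel level, (1 - alpha) on the application level."""
    k, a = normalize(kernel_scores), normalize(app_scores)
    combined = {r: alpha * k[r] + (1 - alpha) * a[r] for r in kernel_scores}
    return sorted(combined, key=combined.get, reverse=True)

# Example: resource B tops the kernel space, but C ranks first once the
# application benchmark dominates the weighting.
kernel = {"A": 120.0, "B": 300.0, "C": 210.0}   # e.g. kernel GFLOPS
app    = {"A": 0.8,   "B": 1.1,   "C": 2.4}     # e.g. application images/s
print(rank_resources(kernel, app, alpha=0.3))    # → ['C', 'B', 'A']
```

The example shows the point the paper's experiments make: a ranking built from kernel benchmarks alone can diverge noticeably from one that also accounts for real application workloads.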