Parallel systems are important computing platforms because they offer tremendous potential for solving inherently parallel and computation-intensive applications. Performance is always a key consideration in determining the success of such systems, yet evaluating and analyzing parallel systems is difficult because of the complex interaction between application characteristics and architectural features. The traditional performance methodologies, namely experimental measurement, theoretical/analytical modeling, and simulation, apply naturally to the performance evaluation of parallel systems. Experimental measurement uses real or synthetic workloads, usually known as benchmarks, to evaluate and analyze performance on actual hardware. Theoretical/analytical models abstract away the details of a parallel system. Simulation and other performance monitoring/visualization tools are extremely popular because they can capture the dynamic nature of the interaction between applications and architectures. Each technique comes in several forms: experimental measurement may be software-based, hardware-based, or hybrid; theoretical/analytical modeling includes queueing networks and Petri nets; and simulation may be discrete-event, trace/execution-driven, or Monte Carlo. Each has its own advantages and disadvantages. The first part of this paper identifies parameters for a comparative survey of these techniques; the second part justifies the need for a modeling approach that combines the advantages of all three performance evaluation techniques; and the final part presents an integrated model that combines the three techniques and uses knowledge-based systems to evaluate the performance of parallel systems. The paper also discusses related issues such as selecting an appropriate metric for evaluating parallel systems and choosing a proper workload and workload characterization.
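As a concrete illustration of the metric-selection issue raised above, the most common figures of merit for a parallel system are speedup and efficiency. The notation below (p processors, serial time T_1, parallel time T_p, serial fraction f) is standard textbook usage and is not taken from this paper:

\[
S(p) = \frac{T_1}{T_p}, \qquad
E(p) = \frac{S(p)}{p}, \qquad
S(p) \le \frac{1}{f + (1-f)/p} \quad \text{(Amdahl's bound)}.
\]

Amdahl's bound also hints at why metric choice and workload characterization are linked: a fixed serial fraction f caps speedup at 1/f no matter how many processors are used, so for workloads that grow with machine size a scaled-speedup metric may be more appropriate.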
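To make the simulation category concrete, here is a minimal discrete-event simulation sketch of p identical processors serving a Poisson stream of tasks. It is an illustrative toy under assumed parameters (arrival_rate, service_rate, etc.), not the integrated model proposed in the paper:

    import heapq
    import random

    def simulate(num_procs=4, num_tasks=1000,
                 arrival_rate=1.0, service_rate=0.3, seed=42):
        """Discrete-event simulation of num_procs processors (FCFS).

        Tasks arrive as a Poisson process; each task runs on the earliest
        available processor. Returns the mean task response time.
        """
        rng = random.Random(seed)
        free_at = [0.0] * num_procs       # time each processor becomes free
        heapq.heapify(free_at)
        clock, total_response = 0.0, 0.0
        for _ in range(num_tasks):
            clock += rng.expovariate(arrival_rate)   # next arrival event
            service = rng.expovariate(service_rate)  # task service demand
            start = max(clock, heapq.heappop(free_at))
            finish = start + service
            heapq.heappush(free_at, finish)
            total_response += finish - clock
        return total_response / num_tasks

    if __name__ == "__main__":
        for p in (1, 2, 4, 8):
            print(p, "processors:", round(simulate(num_procs=p), 3))

Even this toy exhibits the dynamic application/architecture interaction that the abstract attributes to simulation: mean response time depends jointly on the workload parameters and on the number of processors, which no single static metric captures.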