Parallel computer systems are among the most complex of man's creations, making satisfactory performance characterization difficult. Despite this complexity, there are strong, indeed almost irresistible, incentives to quantify parallel system performance using a single metric. The fallacy lies in succumbing to such temptations. A complete performance characterization requires not only an analysis of the system's constituent levels, but also both static and dynamic characterizations. Static or average-behavior analysis alone may mask transients that dramatically alter system performance. Although the human visual system is remarkably adept at interpreting and identifying anomalies in false-color data, the importance of dynamic, visual presentation of scientific data has only recently been recognized. Large, complex parallel systems pose equally vexing performance interpretation problems. Data from hardware and software performance monitors must be presented in ways that emphasize important events while eliding irrelevant details. Design approaches and tools for performance visualization are the subject of this paper.