Improving interactive performance using TIPME

On the vast majority of today's computers, the dominant form of computation is GUI-based user interaction. In such an environment, the user's perception is the final arbiter of performance. Human-factors research shows that a user's perception of performance is affected by unexpectedly long delays. However, most performance-tuning techniques currently rely on throughput-sensitive benchmarks. While these techniques improve the average performance of the system, they do little to detect or eliminate response-time variability, in particular unexpectedly long delays. We introduce a measurement infrastructure that improves user-perceived performance by helping us identify and eliminate the causes of the unexpectedly long response times that users find unacceptable. We describe TIPME (The Interactive Performance Monitoring Environment), a collection of measurement tools that allowed us to quickly and easily diagnose interactive performance “bugs” in a mature operating system. We present two case studies that demonstrate the effectiveness of our measurement infrastructure. Each of the performance problems we identify drastically affects response-time variability in a mature system, demonstrating that current tuning techniques do not address this class of performance problems.
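The contrast the abstract draws is between measuring throughput (average work per unit time) and measuring per-event response latency, where rare but long delays are what users notice. The sketch below is a minimal, hypothetical illustration of the latter idea, not TIPME's actual implementation: it timestamps each simulated input event and the completion of its handling, then flags responses that exceed a perceptibility threshold. The `handle_event` stub and the 100 ms threshold are assumptions made for the example.

```c
/*
 * Hypothetical sketch (not TIPME itself): measure per-event response
 * latency rather than throughput, and flag responses whose latency
 * exceeds a threshold the user would likely notice.
 */
#include <stdio.h>
#include <time.h>
#include <unistd.h>

#define PERCEPTIBLE_MS 100.0   /* assumed threshold for a noticeable delay */

/* Stand-in for real work triggered by a user event (e.g., a keystroke). */
static void handle_event(int i)
{
    /* Simulate occasional long handling times. */
    usleep(i % 10 == 0 ? 150000 : 5000);
}

static double elapsed_ms(struct timespec a, struct timespec b)
{
    return (b.tv_sec - a.tv_sec) * 1e3 + (b.tv_nsec - a.tv_nsec) / 1e6;
}

int main(void)
{
    struct timespec start, end;
    int flagged = 0;

    for (int i = 0; i < 50; i++) {
        clock_gettime(CLOCK_MONOTONIC, &start);   /* input event arrives */
        handle_event(i);
        clock_gettime(CLOCK_MONOTONIC, &end);     /* response completed */

        double ms = elapsed_ms(start, end);
        if (ms > PERCEPTIBLE_MS) {
            printf("event %2d: %.1f ms  <-- user-visible delay\n", i, ms);
            flagged++;
        }
    }
    printf("%d of 50 events exceeded %.0f ms\n", flagged, PERCEPTIBLE_MS);
    return 0;
}
```

Note that a throughput benchmark averaging over these 50 events would report good performance; only a latency-oriented view of the distribution exposes the occasional user-visible stalls the abstract is concerned with.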
