Timing trials, or the trials of timing: experiments with scripting and user-interface languages
This paper describes some basic experiments to see how fast various popular scripting and user-interface languages run on a spectrum of representative tasks. We found enormous variation in performance, depending on many factors, some uncontrollable and even unknowable. There seems to be little hope of predicting performance in other than a most general way; if there is a single clear conclusion, it is that no benchmark result should ever be taken at face value. A few general principles hold:
Compiled code usually runs faster than interpreted code: the more a program has been ‘compiled’ before it is executed, the faster it will run.
Memory-related issues and the effects of memory hierarchies are pervasive: how memory is managed, from hardware caches to garbage collection, can change runtimes dramatically. Yet users have no direct control over most aspects of memory management. (A small illustration of the cache effect appears after the abstract.)
The timing services provided by programs and operating systems are woefully inadequate. It is difficult to measure runtimes reliably and repeatably even for small, purely computational kernels, and it becomes significantly harder when a program does much I/O or graphics. (A minimal timing sketch illustrating this difficulty also follows the abstract.)
Although each language shines in some situations, there are visible and sometimes surprising deficiencies even in what should be mainstream applications. We encountered more than a few bugs, size limitations, maladroit features, and total mysteries. © 1998 John Wiley & Sons, Ltd.
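The memory-hierarchy point can be illustrated with a toy experiment that is not taken from the paper: the sketch below, in C, traverses the same array once with stride 1 and once with a large stride, so the two runs do the same number of additions but with very different locality. The array size, the stride values, and the use of clock() are assumptions chosen only to make the effect visible on a typical machine.

```c
/* Illustrative sketch only: same work, different memory-access pattern. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (1 << 24)   /* 16M ints (~64 MB), larger than typical caches */

int main(void) {
    int *a = malloc((size_t)N * sizeof(int));
    if (a == NULL)
        return 1;
    for (size_t i = 0; i < N; i++)
        a[i] = 1;

    size_t strides[] = {1, 4096};   /* sequential vs. cache-unfriendly */
    for (int k = 0; k < 2; k++) {
        size_t stride = strides[k];
        clock_t start = clock();
        long long sum = 0;
        /* Outer loop over starting offsets ensures every element is
         * touched exactly once, regardless of stride. */
        for (size_t s = 0; s < stride; s++)
            for (size_t i = s; i < N; i += stride)
                sum += a[i];
        double secs = (double)(clock() - start) / CLOCKS_PER_SEC;
        /* Printing sum also keeps the compiler from discarding the loop. */
        printf("stride %6zu: sum %lld, %.3f CPU seconds\n", stride, sum, secs);
    }
    free(a);
    return 0;
}
```

Both traversals perform the same arithmetic; the strided one typically runs several times slower purely because of cache behaviour, which the programmer never asked for and cannot directly control.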
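The difficulty of reliable timing can likewise be sketched with a small harness, again not the paper's own benchmark: it times a trivial computational kernel several times in a row using clock(), which reports CPU time at coarse granularity on many systems. The kernel, the repetition count, and the problem size are all assumptions for illustration.

```c
/* Illustrative sketch only: repeated timings of one small kernel. */
#include <stdio.h>
#include <time.h>

/* A trivial kernel: sum the integers 1..n. */
static long long kernel(long long n) {
    long long sum = 0;
    for (long long i = 1; i <= n; i++)
        sum += i;
    return sum;
}

int main(void) {
    const long long n = 50000000;
    for (int run = 0; run < 5; run++) {
        clock_t start = clock();
        long long result = kernel(n);
        double secs = (double)(clock() - start) / CLOCKS_PER_SEC;
        /* Even with no I/O or graphics, successive runs of the same
         * kernel on the same machine often report different times. */
        printf("run %d: result %lld, %.3f CPU seconds\n", run, result, secs);
    }
    return 0;
}
```

Even this purely computational loop rarely yields identical figures from run to run; once I/O, graphics, or an interpreter's own bookkeeping enter the picture, the variation only grows.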