When the Java programming language was introduced by Sun Microsystems in 1995, there was a perception (properly founded at the time) that its many benefits came at a significant performance cost. The related deficiencies were especially apparent in numerical computing. Our own measurements in 1997 with second-generation Java Virtual Machines (JVMs) found differences in performance of up to one hundredfold relative to C and Fortran. Initial experience with poor performance caused many developers of high-performance numerical applications to reject Java out of hand as a platform for their applications. Despite more recent progress in Java optimization, the performance of commercially available Java platforms is still not on par with state-of-the-art Fortran and C compilers. Programs using complex arithmetic exhibit particularly poor performance. Moreover, today's Java platforms are incapable of automatically applying important optimizations to numerical code, such as loop transformations and automatic parallelization [12]. Nevertheless, we find no technical barriers to high-performance computing in Java. To prove this thesis, we developed a prototype Java environment, called Numerically INtensive JAva, or NINJA, which has demonstrated that Java can obtain Fortran-like performance on a variety of problems in scientific and technical computing. NINJA has addressed such high-performance programming issues as dense and irregular matrix computations, calculations with complex numbers, automatic loop transformations, and automatic parallelization. The NINJA techniques are straightforward to implement and allow reuse of existing optimization components already deployed by software vendors for other languages [9], thus lowering the economic barriers to Java's acceptance in numerically intensive applications.
The next challenge for numerically intensive computing in Java is convincing developers and managers in this domain that Java's benefits can be obtained with performance comparable to that of Fortran and C.
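To illustrate why complex arithmetic is a particular weak spot, consider the following sketch. It is not code from the paper; the class and method names are hypothetical. Because Java (unlike Fortran) has no primitive complex type, a natural implementation represents each complex value as a heap-allocated object, so an inner loop performing a complex multiply-accumulate creates short-lived objects on every iteration, pressure that a Fortran compiler operating on COMPLEX values never incurs:

```java
// Illustrative sketch only: a straightforward (and slow) complex
// multiply-accumulate in Java. With complex numbers as objects, every
// arithmetic operation below allocates a new object on the heap --
// roughly 2n allocations for a length-n dot product.
final class Complex {
    final double re, im;
    Complex(double re, double im) { this.re = re; this.im = im; }
    // (a+bi)(c+di) = (ac - bd) + (ad + bc)i
    Complex times(Complex o) {
        return new Complex(re * o.re - im * o.im, re * o.im + im * o.re);
    }
    Complex plus(Complex o) {
        return new Complex(re + o.re, im + o.im);
    }
}

public class ComplexDot {
    // Accumulates sum(a[i] * b[i]); each loop iteration allocates
    // two temporary Complex objects (the product and the new sum).
    static Complex dot(Complex[] a, Complex[] b) {
        Complex sum = new Complex(0.0, 0.0);
        for (int i = 0; i < a.length; i++) {
            sum = sum.plus(a[i].times(b[i]));
        }
        return sum;
    }

    public static void main(String[] args) {
        Complex[] a = { new Complex(1, 2), new Complex(3, 4) };
        Complex[] b = { new Complex(5, 6), new Complex(7, 8) };
        Complex d = dot(a, b);
        // (1+2i)(5+6i) + (3+4i)(7+8i) = (-7+16i) + (-11+52i) = -18+68i
        System.out.println(d.re + " " + d.im);
    }
}
```

Avoiding this per-operation allocation overhead without changing the language's semantics (for example, by compiling such objects down to pairs of machine doubles) is one of the problems an environment like NINJA must solve.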
[1] Samuel P. Midkiff et al. Quicksilver: a quasi-static compiler for Java. OOPSLA '00, 2000.
[2] Michael Wolfe et al. High Performance Compilers for Parallel Computing. 1995.
[3] Vivek Sarkar et al. Automatic selection of high-order transformations in the IBM XL FORTRAN compilers. IBM J. Res. Dev., 1997.
[4] Mithuna Thottethodi et al. Nonlinear array layouts for hierarchical memory systems. ICS '99, 1999.
[5] Jack J. Dongarra et al. Solving Linear Systems on Vector and Shared Memory Computers. 1990.
[6] Ronald F. Boisvert et al. Developing numerical libraries in Java. Concurr. Pract. Exp., 1998.
[7] Fred G. Gustavson et al. Recursion leads to automatic variable blocking for dense linear-algebra algorithms. IBM J. Res. Dev., 1997.
[8] Guy L. Steele et al. The Java Language Specification. 1996.
[9] Samuel P. Midkiff et al. Java programming for high-performance numerical computing. IBM Syst. J., 2000.
[10] Henri Casanova et al. Java Access to Numerical Libraries. Concurr. Pract. Exp., 1997.
[11] Samuel P. Midkiff et al. Automatic loop transformations and parallelization for Java. ICS '00, 2000.