Avalon: an Alpha/Linux cluster achieves 10 Gflops for $150k

As an entry for the 1998 Gordon Bell price/performance prize, we present two calculations from the disciplines of condensed matter physics and astrophysics. The simulations were performed on a 70-processor DEC Alpha cluster (Avalon) constructed entirely from commodity personal computer technology and freely available software, for a cost of 152 thousand dollars.

Avalon performed a 60 million particle molecular dynamics (MD) simulation of shock-induced plasticity using the SPaSM MD code. The first part of this simulation sustained approximately 10 Gflops over a 44-hour period and saved 68 Gbytes of raw data. The resulting price/performance is $15/Mflop, or equivalently, 67 Gflops per million dollars. This is more than a factor of three better than last year's Gordon Bell price/performance winners. This simulation is similar to those which won part of the 1993 Gordon Bell performance prize using a 1024-node CM-5. The simulation continued to run for a total of 332 hours on Avalon, computing a total of 1.12 x 10^16 floating point operations. This puts it among the few scientific simulations to have ever involved more than 10 Petaflops of computation.

Avalon also performed a gravitational treecode N-body simulation of galaxy formation using 9.75 million particles, which sustained an average of 6.78 Gflops over a 26-hour period. This simulation is exactly the same as the one which won a Gordon Bell price/performance prize last year on the Loki cluster, at a total performance 7.7 times that of Loki and a price/performance 2.6 times better than Loki. Further, Avalon ranked 315th on the June 1998 TOP500 list, with a result of 19.3 Gflops on the parallel Linpack benchmark.
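The headline figures above can be cross-checked with simple arithmetic. The sketch below (a reader's sanity check, not part of the original paper; the paper's quoted numbers are rounded) derives the price/performance and average rate from the reported cost, sustained speed, run time, and total operation count:

```python
# Sanity-check the abstract's headline numbers (all inputs taken from the text).

cost_dollars = 152_000        # reported cost of the Avalon cluster
sustained_flops = 10e9        # ~10 Gflops sustained on the 44-hour MD run

# Dollars per Mflop of sustained performance (abstract quotes $15/Mflop).
price_per_mflop = cost_dollars / (sustained_flops / 1e6)          # ~15.2

# Equivalent figure: Gflops per million dollars (abstract quotes 67).
gflops_per_megadollar = (sustained_flops / 1e9) / (cost_dollars / 1e6)  # ~65.8

# The quoted 1.12e16 total floating point operations over 332 hours
# implies an average rate slightly below the 10 Gflops peak.
total_ops = 1.12e16
avg_gflops = total_ops / (332 * 3600) / 1e9                       # ~9.4

print(f"{price_per_mflop:.1f} $/Mflop, "
      f"{gflops_per_megadollar:.1f} Gflops/M$, "
      f"{avg_gflops:.1f} Gflops average")
```

The small spread between the $15/Mflop and 67 Gflops/M$ figures reflects rounding in the abstract; both follow from the same $152k cost and ~10 Gflops sustained rate.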
