Local supercomputing training in the computational sciences using remote national centers

Local training for high performance computing using remote national supercomputing centers is quite different from training at the centers themselves or on local machines. The local site's computing and communication resources are a fraction of those available at the national centers. However, training at the local site has the potential to reach more computational science and engineering students in high performance computing by including those who are unable to travel to a national center for training. The experience gained from supercomputing courses and workshops over the last 17 years at the University of Illinois at Chicago is described. These courses serve as the kernel of the program for training computational science and engineering students. Many training techniques are illustrated, such as local user's guides and starter problems that are portable to other local sites. The training techniques continually evolve to keep pace with rapid changes in supercomputing. An essential feature of this program is the use of real supercomputer time on several supercomputer platforms at national centers, with an emphasis on solving large-scale problems.
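As an illustration only (not taken from the paper), the following is a minimal sketch of the kind of "starter problem" a local supercomputing training course might assign, assuming an MPI/C environment on the remote machine: a parallel midpoint-rule estimate of pi, with the work split across ranks and the partial sums combined on rank 0.

/* Hypothetical starter problem: estimate pi by integrating 4/(1+x^2) on [0,1].
 * Compile with: mpicc -o pi pi.c    Run with: mpirun -np 4 ./pi
 * (compiler and launcher names are assumptions; they vary by site.)
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    const long n = 1000000;              /* number of integration intervals */
    int rank, size;
    double h, local_sum = 0.0, pi = 0.0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    h = 1.0 / (double)n;
    /* each rank handles a strided subset of the intervals */
    for (long i = rank; i < n; i += size) {
        double x = h * ((double)i + 0.5);   /* midpoint of interval i */
        local_sum += 4.0 / (1.0 + x * x);
    }
    local_sum *= h;

    /* combine the partial sums on rank 0 */
    MPI_Reduce(&local_sum, &pi, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("pi approx = %.12f\n", pi);

    MPI_Finalize();
    return 0;
}

A problem of this size runs in seconds on a remote machine, so students can focus on the mechanics of remote login, job submission, and scaling the rank count before moving on to large-scale applications.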
