Teaching Parallel & Distributed Computing with MPI (Abstract Only)

CS2013 brings parallel and distributed computing (PDC) into the CS curricular mainstream. The Message Passing Interface (MPI) is a platform-independent, industry-standard PDC library that provides bindings for C, C++, and Fortran; third parties have created implementations for Python and Java. This hands-on workshop introduces MPI basics using parallel patterns, including the single program multiple data (SPMD), send-receive message passing, master-worker, parallel loop, broadcast, reduction, scatter, gather, and barrier patterns. Participants will explore 12 short programs designed to help students understand MPI and PDC basics, plus longer programs that use MPI to solve significant problems. The intended audience is CS educators who want to learn how message passing can be used to teach PDC. No prior experience with PDC or MPI is required; familiarity with a C-family language and the command line is helpful but not required. The workshop includes: (i) self-paced, hands-on experimentation with the working MPI programs, and (ii) a discussion of how these programs may be used to achieve the goals of CS2013. Participants will work on a remote Beowulf cluster accessed via SSH, and will need a laptop or tablet with an SSH client (e.g., BitVise, iSSH), or a laptop with both a recent C/C++ compiler and an MPI implementation (e.g., OpenMPI or MPICH) installed. See http://csinparallel.org.
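
To give a flavor of the kind of short program the workshop explores, the following minimal C sketch combines the SPMD, parallel loop, and reduction patterns: every process runs the same program, each sums a strided share of the numbers 1 through 100, and MPI_Reduce combines the partial sums at rank 0. This is an illustrative sketch, not one of the workshop's 12 programs.

    /* sum_reduce.c (hypothetical filename): SPMD + parallel loop + reduction */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv) {
        int rank = 0, numProcs = 0;

        MPI_Init(&argc, &argv);                   /* start MPI */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);     /* this process's id */
        MPI_Comm_size(MPI_COMM_WORLD, &numProcs); /* total number of processes */

        /* parallel loop pattern: each process sums a strided share of 1..100 */
        long localSum = 0, totalSum = 0;
        for (int i = rank + 1; i <= 100; i += numProcs) {
            localSum += i;
        }

        /* reduction pattern: combine the local sums at rank 0 */
        MPI_Reduce(&localSum, &totalSum, 1, MPI_LONG, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0) {
            printf("Sum of 1..100 computed by %d processes: %ld\n",
                   numProcs, totalSum);
        }

        MPI_Finalize();                           /* shut down MPI */
        return 0;
    }

With OpenMPI or MPICH installed, such a program is typically built and run with the standard wrapper tools, e.g., "mpicc sum_reduce.c -o sum_reduce" followed by "mpirun -np 4 ./sum_reduce".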