Towards Reversible Basic Linear Algebra Subprograms: A Performance Study

Problems such as fault tolerance and scalable synchronization can be solved efficiently by exploiting the reversibility of applications. Making applications reversible by relying on computation rather than on memory is ideal for large-scale parallel computing, especially for the next generation of supercomputers in which memory is expensive in terms of latency, energy, and price. In this direction, a case study is presented here in reversing a computational core, namely, Basic Linear Algebra Subprograms (BLAS), which is widely used in scientific applications. A new Reversible BLAS (RBLAS) library interface has been designed, and a prototype has been implemented with two modes: (1) a memory-mode, in which reversibility is obtained by checkpointing to memory, and (2) a computational-mode, in which nothing is saved and restoration is done entirely via inverse computation. The article focuses on detailed performance benchmarking to evaluate the runtime dynamics and performance effects, comparing reversible computation with checkpointing on both traditional CPU platforms and recent GPU accelerator platforms. For BLAS Level-1 subprograms, the data indicate an over order-of-magnitude speedup of reversible computation compared to checkpointing. For BLAS Level-2 and Level-3, a more complex tradeoff is observed between reversible computation and checkpointing, depending on the computational and memory complexities of the subprograms.
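To make the two modes concrete, the following is a minimal sketch (not the paper's actual RBLAS interface) contrasting checkpoint-based restoration with inverse computation for the Level-1 AXPY operation, y := a*x + y. All function names here are illustrative assumptions.

```python
def axpy_forward(a, x, y):
    """In-place BLAS Level-1 AXPY: y := a*x + y."""
    for i in range(len(y)):
        y[i] += a * x[i]

# Memory-mode: save a copy of y before the operation (O(n) extra memory),
# then restore by copying the checkpoint back.
def axpy_checkpoint(y):
    return list(y)

def axpy_restore(y, checkpoint):
    y[:] = checkpoint

# Computational-mode: nothing is saved; restoration is the inverse
# computation y := y - a*x. This is exact in real arithmetic; in
# floating point, round-off may prevent bit-exact restoration in general.
def axpy_reverse(a, x, y):
    for i in range(len(y)):
        y[i] -= a * x[i]
```

The memory-mode pays a memory cost proportional to the vector length, while the computational-mode pays only the cost of re-executing the (inverted) arithmetic, which is the tradeoff the benchmarks in the article quantify.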
