Preconditioner Auto-Tuning Using Deep Learning for Sparse Iterative Algorithms

In numerical libraries for sparse matrix operations, many tuning parameters govern implementation selection, and different parameter choices can yield drastically different performance. Moreover, the optimal implementation depends on the sparse matrix being operated on, and it is difficult to identify it without executing every candidate implementation and measuring its performance on the given matrix. In this study, we propose a deep-learning-based implementation selection method for sparse iterative algorithms and preconditioners in a numerical library. The proposed method represents the features of a sparse matrix as a full-color image. We present an image generation method that partitions a given matrix to produce its feature image, so that the value of each matrix element is taken into account in the implementation selection. We then evaluate the effectiveness of the proposed method through a numerical experiment that measures the accuracy of implementation selection. The training data consist of pairs of a sparse matrix and its optimal implementation; the optimal implementation for each training matrix is determined in advance by executing every implementation and recording the best one. The experimental results show that the proposed method selects the optimal implementation for each sparse matrix with an accuracy of 79.5%.
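
The abstract does not give implementation details, but the following is a minimal sketch of how such a feature image might be generated: the matrix is partitioned into a fixed grid of blocks, and per-block statistics of the element values are mapped to the RGB channels. The function name matrix_to_feature_image, the grid size, and the particular channel encodings are assumptions for illustration, not the paper's exact method.

    import numpy as np
    import scipy.sparse as sp

    def matrix_to_feature_image(A, grid=64):
        """Partition the sparse matrix A into a grid x grid block layout and
        encode per-block statistics as an RGB image of shape (grid, grid, 3).
        This is an illustrative sketch; the paper's actual encoding may differ."""
        A = sp.coo_matrix(A)
        n_rows, n_cols = A.shape
        img = np.zeros((grid, grid, 3), dtype=np.float64)

        # Map each nonzero element to the block (image pixel) it falls into.
        bi = (A.row.astype(np.int64) * grid) // n_rows
        bj = (A.col.astype(np.int64) * grid) // n_cols

        # Channel 0: nonzero count per block (structural density).
        np.add.at(img[:, :, 0], (bi, bj), 1.0)
        # Channel 1: sum of absolute values per block (overall magnitude).
        np.add.at(img[:, :, 1], (bi, bj), np.abs(A.data))
        # Channel 2: maximum absolute value per block (dominant element).
        np.maximum.at(img[:, :, 2], (bi, bj), np.abs(A.data))

        # Normalize each channel to [0, 1] so matrices of different sizes
        # and scales produce comparable images.
        for c in range(3):
            m = img[:, :, c].max()
            if m > 0:
                img[:, :, c] /= m
        return img

Images produced this way could then be fed to a standard image classifier (e.g., a convolutional network) whose output classes correspond to the candidate solver and preconditioner implementations, with labels obtained by the exhaustive execution described above.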
