The introduction of the Basic Linear Algebra Subprograms (BLAS) in the 1970s paved the way for a succession of libraries that solve the same problems with improved algorithms and improved hardware. These BLAS implementations drove innovation in High-Performance Computing (HPC), with most of the attention going to the level-3 BLAS because of its wide applicability in fields well beyond computer science. The level-1 and level-2 BLAS, however, have been comparatively neglected. We address this gap by introducing new algorithms for the vector-vector dot product, the vector-vector outer product, and the matrix-vector product that significantly improve the performance of these operations. We are not introducing a new library but algorithms that improve upon the current state of the art. Moreover, we rely on the FMA instruction, OpenMP, and the compiler to optimize the code rather than implementing the algorithms in assembly; our implementation is therefore machine-oblivious and depends on the compiler's ability to optimize the code. This paper makes the following contributions:
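As a rough illustration of the compiler-driven style described above (not the paper's actual code), the following C sketch computes a dot product using an OpenMP reduction for threading and the standard fma() call from <math.h>, leaving vectorization and instruction selection to the compiler. The function name ddot_sketch and its signature are assumptions made for this example.

```c
#include <math.h>
#include <stddef.h>

/* Illustrative sketch only: plain C with FMA and OpenMP instead of
 * hand-written assembly. Name and signature are hypothetical, not the
 * paper's interface. Compile with e.g. -O3 -mfma -fopenmp. */
double ddot_sketch(size_t n, const double *x, const double *y)
{
    double sum = 0.0;
    /* OpenMP distributes the loop across threads and combines the
     * partial sums; the compiler is trusted to vectorize the body. */
    #pragma omp parallel for reduction(+:sum)
    for (size_t i = 0; i < n; ++i)
        sum = fma(x[i], y[i], sum); /* maps to a hardware FMA when available */
    return sum;
}
```

The point of the sketch is the division of labor: threading is expressed through OpenMP pragmas, the fused multiply-add is expressed portably in C, and everything machine-specific is left to the compiler, which is what makes the approach machine-oblivious.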