Fast parallel multidimensional FFT using advanced MPI

We present a new method for performing the global redistributions of multidimensional arrays that are essential to parallel fast Fourier (or similar) transforms. Traditional methods use standard all-to-all collective communication of contiguous memory buffers, thus necessarily requiring local data-realignment steps interleaved between the redistribution and transform steps. Instead, our method takes advantage of subarray datatypes and generalized all-to-all scatter/gather from the MPI-2 standard to communicate discontiguous memory buffers, effectively eliminating the need for local data realignments. Although generalized all-to-all communication of discontiguous data is generally slower, our proposal economizes on local work. For a range of strong and weak scaling tests, we found the overall performance of our method to be on par with, and often better than, well-established libraries such as MPI-FFTW, P3DFFT, and 2DECOMP&FFT. We provide compact routines implemented at the highest possible level using the MPI bindings for the C programming language. These routines apply to any global redistribution, over any two directions of a multidimensional array, decomposed on arbitrary Cartesian processor grids (1D slabs, 2D pencils, or even higher-dimensional decompositions). The high-level implementation makes the code easy to read, maintain, and eventually extend. Our approach also stands to benefit from future optimizations in the internal datatype-handling engines of MPI implementations.
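To illustrate the mechanism the abstract describes, the following is a minimal C sketch, not the authors' library code: it performs the slab-to-slab transpose of a square N x N array using MPI_Type_create_subarray to describe each peer's discontiguous block and a single MPI_Alltoallw call with unit counts and zero byte displacements, so no manual packing or local realignment step is needed. The square shape, the divisibility of N by the process count, and all identifiers are assumptions of this sketch; the actual routines handle arbitrary shapes and higher-dimensional decompositions.

/* transpose_sketch.c -- a minimal sketch, NOT the authors' library code.
 * Slab-to-slab transpose of a square N x N array of doubles via
 * MPI_Type_create_subarray + MPI_Alltoallw (no pack/unpack buffers).
 * Assumes N divisible by the process count (a simplification of the sketch).
 *
 * Build: mpicc transpose_sketch.c -o transpose_sketch
 * Run:   mpiexec -n 4 ./transpose_sketch
 */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int N = 8 * size;   /* global extent (chosen for the sketch) */
    const int n = N / size;   /* local block extent per process        */

    /* A: my slab of n global rows by N columns (row-major, C order).
     * B: my slab of N global rows by n columns after redistribution. */
    double *A = malloc((size_t)n * N * sizeof(double));
    double *B = malloc((size_t)N * n * sizeof(double));
    for (int i = 0; i < n; i++)
        for (int j = 0; j < N; j++)
            A[i * N + j] = (double)((rank * n + i) * N + j); /* global index */

    int *counts = malloc(size * sizeof(int));
    int *displs = malloc(size * sizeof(int));
    MPI_Datatype *stypes = malloc(size * sizeof(MPI_Datatype));
    MPI_Datatype *rtypes = malloc(size * sizeof(MPI_Datatype));

    for (int p = 0; p < size; p++) {
        /* Send to rank p: all my rows, columns [p*n, (p+1)*n) of A. */
        int ssizes[2] = {n, N}, ssubs[2] = {n, n}, sstarts[2] = {0, p * n};
        MPI_Type_create_subarray(2, ssizes, ssubs, sstarts,
                                 MPI_ORDER_C, MPI_DOUBLE, &stypes[p]);
        MPI_Type_commit(&stypes[p]);

        /* Receive from rank p: rows [p*n, (p+1)*n), all my columns of B. */
        int rsizes[2] = {N, n}, rsubs[2] = {n, n}, rstarts[2] = {p * n, 0};
        MPI_Type_create_subarray(2, rsizes, rsubs, rstarts,
                                 MPI_ORDER_C, MPI_DOUBLE, &rtypes[p]);
        MPI_Type_commit(&rtypes[p]);

        /* The subarray types already encode the block offsets, so
         * every count is 1 and every byte displacement is 0. */
        counts[p] = 1;
        displs[p] = 0;
    }

    /* One collective call performs the whole global redistribution:
     * the MPI datatype engine gathers/scatters the discontiguous
     * blocks directly, with no intermediate packing buffers. */
    MPI_Alltoallw(A, counts, displs, stypes,
                  B, counts, displs, rtypes, MPI_COMM_WORLD);

    if (rank == 0) /* should print global row 0, columns 0..3: 0 1 2 3 */
        printf("B[0][0..3] on rank 0: %g %g %g %g\n", B[0], B[1], B[2], B[3]);

    for (int p = 0; p < size; p++) {
        MPI_Type_free(&stypes[p]);
        MPI_Type_free(&rtypes[p]);
    }
    free(A); free(B); free(counts); free(displs);
    free(stypes); free(rtypes);
    MPI_Finalize();
    return 0;
}

Note that after the call each rank's block B keeps the global orientation of the data, merely decomposed over the other axis; the transform along the newly local direction can then proceed directly (for example as a strided FFT, which libraries such as FFTW support), which is what removes the local realignment steps.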
