The simulation of coupled problems requires the frequent exchange of data between the different domains. This can be achieved with a mapping algorithm, which ensures that mapped quantities take corresponding values on either side of the coupling interface, even when the interfaces move or do not match each other exactly.
There is a clear trend in high-performance computing (HPC) towards compute clusters composed of many individual processors. Computation on such clusters requires a distributed-memory programming model, in which not all data is readily available during the simulation. This calls for careful code design to avoid introducing bottlenecks where calculations must wait for data to be exchanged between the parallel subdomains. In the context of mapping algorithms, distributed-memory programs pose an additional challenge, since the data to be exchanged not only lies on different domains but is also distributed among multiple physical processors. We will address the practical aspects of implementing interface mapping functions that work in conjunction with MPI-parallel finite element solvers for each coupled domain. The focus will be on the aspects that are specific to the parallel performance of the algorithm: identifying the origin–destination pairs for the interpolation when they lie on different processes, constructing a communication strategy to exchange data efficiently in distributed memory, and handling the limitations that the MPI context imposes on the mapping algorithm itself. Different mapping algorithms, namely nearest-node, element-interpolation and mortar-type methods, will be considered. The proposed approach is being implemented within the Kratos MultiPhysics finite element framework and validated through the solution of model problems, allowing both the accuracy and the parallel performance of each algorithm to be quantified, with the end goal of using the resulting implementation for the simulation of fluid–structure interaction (FSI) problems in an MPI context.
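To fix ideas, the mapper types differ in how the value at a destination point is built from the origin nodes: nearest-node mapping copies the value of the closest origin node, element interpolation weights the nodal values of the origin element containing the (projected) point with its shape functions, and mortar-type methods enforce the mapping in a weak, integral sense, which additionally requires assembling interface matrices. The following minimal 1-D sketch of the first two weight choices is purely illustrative; the function names and setup are assumptions and not part of the Kratos MultiPhysics API.

```python
# Hypothetical 1-D illustration of the mapping weights (not Kratos API).
import numpy as np

def nearest_node_weights(xi):
    """Nearest node: the closest origin node of the element gets weight 1."""
    return np.array([1.0, 0.0]) if xi < 0.5 else np.array([0.0, 1.0])

def element_interpolation_weights(xi):
    """Element interpolation: linear shape functions evaluated at xi."""
    return np.array([1.0 - xi, xi])

# Destination point at local coordinate xi = 0.3 inside an origin element
# whose two nodes carry the values u = (2.0, 4.0):
u = np.array([2.0, 4.0])
print(nearest_node_weights(0.3) @ u)           # 2.0 -> value of the closest node
print(element_interpolation_weights(0.3) @ u)  # 2.6 -> linear interpolation
```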
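The parallel-specific part of the work is the search for origin–destination pairs across ranks and the subsequent data exchange. As a rough sketch of these two phases, the following hypothetical mpi4py example assembles a distributed nearest-node mapper: an all-gather makes every rank aware of all destination points, each rank proposes its best local candidate, and an all-to-all returns the candidates so that each destination point keeps the globally closest one. This is only a schematic assumption for illustration (a production mapper would restrict the search with bounding boxes or trees rather than gathering everything) and is not the Kratos MultiPhysics implementation.

```python
# Hypothetical sketch of a distributed nearest-node mapper with mpi4py
# (illustrative only; names and the all-gather search are assumptions).
import numpy as np
from mpi4py import MPI


def nearest_node_map(dest_coords, orig_coords, orig_values, comm):
    """Map values from a distributed origin interface onto the destination
    points owned by this rank, using nearest-node interpolation.

    dest_coords : (n_dest, dim) destination points owned by this rank
    orig_coords : (n_orig, dim) origin nodes owned by this rank
    orig_values : (n_orig,)     values attached to the local origin nodes
    """
    # Search phase: every rank learns about every rank's destination points.
    all_dest = comm.allgather(dest_coords)

    # For the destination points of each rank, propose the best local candidate.
    candidates = []  # candidates[r] = (distances, values) prepared for rank r
    for remote_dest in all_dest:
        if len(orig_coords) == 0 or len(remote_dest) == 0:
            candidates.append((np.full(len(remote_dest), np.inf),
                               np.zeros(len(remote_dest))))
            continue
        # Pairwise distances between remote destinations and local origin nodes.
        d = np.linalg.norm(remote_dest[:, None, :] - orig_coords[None, :, :],
                           axis=-1)
        best = d.argmin(axis=1)
        candidates.append((d[np.arange(len(remote_dest)), best],
                           orig_values[best]))

    # Communication phase: send each candidate list back to the owning rank.
    received = comm.alltoall(candidates)

    # Each local destination point keeps the globally closest candidate.
    dist = np.stack([r[0] for r in received])  # shape: (n_ranks, n_dest)
    vals = np.stack([r[1] for r in received])
    winner = dist.argmin(axis=0)
    return vals[winner, np.arange(dest_coords.shape[0])]


if __name__ == "__main__":
    comm = MPI.COMM_WORLD
    rng = np.random.default_rng(comm.Get_rank())
    dest = rng.random((5, 3))   # local destination interface points
    orig = rng.random((8, 3))   # local origin interface nodes
    vals = orig[:, 0]           # map the x-coordinate as a sample field
    print(f"rank {comm.Get_rank()}:", nearest_node_map(dest, orig, vals, comm))
```

The all-to-all step is where the communication strategy matters: exchanging only candidate pairs rather than whole fields keeps the exchanged data volume proportional to the size of the coupling interface.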