Challenges and Issues of Supporting Task Parallelism in MPI

Task parallelism extracts the potential parallelism of irregular structures, which vary with the input data, through the definition of abstract tasks and their dependencies. Shared-memory APIs such as OpenMP and TBB support this model and deliver performance through efficient task scheduling. In this work, we provide arguments for supporting task parallelism in MPI. We explain how native MPI can be used to define tasks, their dependencies, and their runtime scheduling, and we discuss the associated performance issues. Our preliminary experiments show that it is possible to implement efficient task-parallel MPI programs and thereby broaden the range of applications covered by the MPI standard.
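As a concrete illustration of the kind of pattern native MPI already supports, the C sketch below implements dynamic scheduling of independent tasks with point-to-point messages: rank 0 acts as a scheduler that hands out task identifiers to whichever worker asks first. This is a minimal sketch under our own assumptions, not the paper's implementation; the task body run_task, the task count, and the tag constants are illustrative placeholders, and dependency tracking between tasks is omitted for brevity.

/* Minimal sketch (illustrative, not the paper's code): rank 0 dynamically
 * schedules NUM_TASKS independent tasks over native MPI point-to-point
 * messages; run_task and the tag constants are hypothetical placeholders. */
#include <mpi.h>

#define NUM_TASKS 64
#define TAG_TASK  1   /* scheduler -> worker: a task id follows */
#define TAG_STOP  2   /* scheduler -> worker: no more work      */
#define TAG_READY 3   /* worker -> scheduler: request for work  */

/* Placeholder task body; a real application would run irregular work here. */
static double run_task(int id) { return (double)id * id; }

int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) {                       /* scheduler */
        int next = 0, active = size - 1, dummy;
        MPI_Status st;
        while (active > 0) {
            /* Serve whichever worker asks first: this wildcard receive is
             * what gives dynamic load balancing for irregular workloads. */
            MPI_Recv(&dummy, 1, MPI_INT, MPI_ANY_SOURCE, TAG_READY,
                     MPI_COMM_WORLD, &st);
            if (next < NUM_TASKS) {
                MPI_Send(&next, 1, MPI_INT, st.MPI_SOURCE, TAG_TASK,
                         MPI_COMM_WORLD);
                next++;
            } else {
                MPI_Send(&next, 1, MPI_INT, st.MPI_SOURCE, TAG_STOP,
                         MPI_COMM_WORLD);
                active--;                  /* this worker is retired */
            }
        }
    } else {                               /* worker */
        int id = 0;
        MPI_Status st;
        for (;;) {
            MPI_Send(&id, 1, MPI_INT, 0, TAG_READY, MPI_COMM_WORLD);
            MPI_Recv(&id, 1, MPI_INT, 0, MPI_ANY_TAG, MPI_COMM_WORLD, &st);
            if (st.MPI_TAG == TAG_STOP) break;
            run_task(id);                  /* execute the assigned task */
        }
    }
    MPI_Finalize();
    return 0;
}

The MPI_ANY_SOURCE receive lets the scheduler serve requests in completion order rather than rank order, which is the essential ingredient for load-balancing tasks whose costs vary with the input data.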