Comparing Message Passing Interface and MapReduce for large-scale parallel ranking and selection

We compare two frameworks for implementing ranking-and-selection algorithms in large-scale parallel computing environments. The Message Passing Interface (MPI) gives the programmer complete control over sending and receiving messages between cores, but it is fragile with respect to core failures and lost or corrupted messages. MapReduce, in contrast, handles all communication automatically and is quite robust, but it is more rigid in how algorithms can be expressed. As expected in a high-performance computing context, we find that MPI is the more efficient of the two environments, although MapReduce remains a reasonable choice. MapReduce may therefore be attractive in environments where cores can stall or fail, as is possible in low-budget cloud computing.
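To make the contrast concrete, the MapReduce style constrains a ranking-and-selection procedure to a map phase (run simulation replications, emitting key–value pairs keyed by system) followed by a reduce phase (aggregate per-system statistics and pick a winner). The following is a minimal illustrative sketch of that pattern in plain Python, not the authors' implementation; the `simulate` function and its true means are hypothetical stand-ins for a real simulation model.

```python
import random
from collections import defaultdict

def simulate(system_id, rng):
    # Hypothetical simulation model: true mean equals system_id,
    # observed with unit-variance Gaussian noise.
    return system_id + rng.gauss(0, 1)

def map_phase(systems, reps, seed=0):
    # Mapper: emit (system_id, output) pairs, one per replication.
    rng = random.Random(seed)
    for s in systems:
        for _ in range(reps):
            yield s, simulate(s, rng)

def reduce_phase(pairs):
    # Reducer: group outputs by system key and compute sample means.
    acc = defaultdict(lambda: [0.0, 0])
    for s, y in pairs:
        acc[s][0] += y
        acc[s][1] += 1
    return {s: total / n for s, (total, n) in acc.items()}

def select_best(systems, reps=500):
    # Select the system with the largest sample mean.
    means = reduce_phase(map_phase(systems, reps))
    return max(means, key=means.get)
```

In a real MapReduce deployment the framework, not the user, partitions the map tasks across cores, shuffles pairs by key, and reruns any task whose core fails; the rigidity the abstract mentions is that each stage must fit this map-shuffle-reduce shape, whereas MPI would let workers exchange elimination decisions mid-procedure.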
