Inter-process communication, MPI and MPICH in a microkernel environment: A comparative analysis

Inter-process communication (IPC) is one of the crucial aspects of every microkernel. The Message Passing Interface (MPI) is a standardized specification for communication among processes, and Message Passing Interface Chameleon (MPICH) is a portable implementation of that specification. This paper compares IPC, MPI and MPICH in terms of efficiency and processor computational cost. Several experiments are performed to evaluate the efficiency of each approach. Furthermore, the paper surveys research published since 2013 to assess the feasibility of replacing native IPC with MPICH in a microkernel environment.
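
To make the message-passing model concrete, the minimal C sketch below (an illustration, not code from the paper; the file name, rank count, and payload are assumed) shows a blocking point-to-point exchange as an MPI implementation such as MPICH would execute it. In a microkernel, an analogous exchange between co-located processes would instead be mapped onto kernel IPC primitives.

```c
/* hello_mpi.c -- minimal sketch of MPI point-to-point message passing.
 * Build and run with an MPI implementation such as MPICH:
 *   mpicc hello_mpi.c -o hello_mpi && mpiexec -n 2 ./hello_mpi
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, value;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        value = 42;
        /* Blocking send of one integer to rank 1 with tag 0. */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* Matching blocking receive from rank 0. */
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 1 received %d\n", value);
    }

    MPI_Finalize();
    return 0;
}
```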
