Asynchronous In Situ Connected-Components Analysis for Complex Fluid Flows

The simulation of multiscale physics is an important challenge for scientific computing. For this class of problems, large three-dimensional simulations are performed to advance scientific inquiry. On massively parallel computing systems, the volume of data generated by such approaches can become a productivity bottleneck if the raw data generated from the simulation is analyzed in a post-processing step. To address this, we present a physics-based framework for in situ data reduction that is theoretically grounded in multiscale averaging theory. We show how task parallelism can be exploited to concurrently perform a variety of analysis tasks with data-dependent costs, including the generation of iso-surfaces, morphological analyses, and connected-components analysis. All analyses are performed in parallel using distributed memory and use the same domain decomposition as the simulation. A task management framework is constructed to leverage the parallelism available within a node for analysis: it launches asynchronous analysis threads, manages dependencies between tasks, promotes data locality, and minimizes the impact of data transfers. The framework is applied to analyze GPU-based simulations of two-fluid-phase flow in porous media, generating a set of averaged measures that represents the overall system behavior. We demonstrate how the approach can be applied to perform physically consistent analysis over fluid sub-regions determined from connected-components analysis. Simulations performed on Oak Ridge National Laboratory's Titan supercomputer are profiled to demonstrate the performance of the associated multi-threaded in situ analysis approach for a typical production simulation of two-fluid-phase flow.
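The task management pattern described in the abstract (asynchronous analysis tasks launched concurrently, with dependencies between tasks managed explicitly) can be illustrated with a minimal sketch. This is not the paper's implementation; the `TaskGraph` class and its method names are hypothetical, and a thread pool stands in for the paper's within-node analysis threads:

```python
# Hypothetical sketch of dependency-aware asynchronous task scheduling.
# Independent analyses (e.g., iso-surface generation and averaging) run
# concurrently; a dependent task (e.g., per-component measures computed
# after labeling) blocks only on its declared prerequisites.
import concurrent.futures


class TaskGraph:
    def __init__(self, max_workers=4):
        self.pool = concurrent.futures.ThreadPoolExecutor(max_workers)
        self.futures = {}

    def submit(self, name, fn, deps=()):
        """Schedule fn(*dep_results) to run once all deps have finished.

        Dependencies must be submitted before their dependents, so the
        wait-for graph follows submission order and cannot cycle.
        """
        dep_futures = [self.futures[d] for d in deps]

        def run():
            args = [f.result() for f in dep_futures]  # block on prerequisites
            return fn(*args)

        self.futures[name] = self.pool.submit(run)
        return self.futures[name]

    def result(self, name):
        """Block until the named task finishes and return its value."""
        return self.futures[name].result()
```

A usage sketch: `g.submit("label", run_labeling)` and `g.submit("isosurface", run_isosurface)` proceed concurrently, while `g.submit("measures", compute_measures, deps=("label",))` starts only after labeling completes.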
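The connected-components analysis used to identify fluid sub-regions can be sketched in its simplest serial form: union-find labeling of a binary phase-indicator field with 6-connectivity. The paper's analysis is distributed-memory over the simulation's domain decomposition; a parallel version would label each subdomain locally in this manner and then merge labels across subdomain boundaries. The function below is an illustrative assumption, not the paper's code:

```python
# Hypothetical sketch: serial connected-components labeling of a 3D binary
# field (nested lists of booleans) using union-find with 6-connectivity.

def label_components(grid):
    """Return a dict mapping (i, j, k) -> component id for True cells."""
    nx, ny, nz = len(grid), len(grid[0]), len(grid[0][0])
    parent = {}

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[rb] = ra

    for i in range(nx):
        for j in range(ny):
            for k in range(nz):
                if not grid[i][j][k]:
                    continue
                cell = (i, j, k)
                parent[cell] = cell
                # Merge with already-visited face neighbors (6-connectivity).
                for n in ((i - 1, j, k), (i, j - 1, k), (i, j, k - 1)):
                    if min(n) >= 0 and grid[n[0]][n[1]][n[2]]:
                        union(n, cell)

    # Flatten union-find roots to consecutive component ids.
    roots, labels = {}, {}
    for cell in parent:
        labels[cell] = roots.setdefault(find(cell), len(roots))
    return labels
```

Physically consistent measures (volumes, interfacial areas, and similar averaged quantities) can then be accumulated per component id over the labeled sub-regions.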
