Earthquake and Tsunami Workflow Leveraging the Modern HPC/Cloud Environment in the LEXIS Project

Accurate and rapid earthquake loss assessments and tsunami early warnings are critical in modern society, enabling appropriate and timely emergency-response decisions. In the LEXIS project, we seek to enhance the workflow of rapid loss assessment and emergency decision support by leveraging an orchestrated heterogeneous environment that combines high-performance computing (HPC) resources and Cloud infrastructure. The workflow consists of three main applications. First, after an earthquake occurs, its shaking distribution (ShakeMap) is computed with the OpenQuake code. Second, if the earthquake may have triggered a tsunami, tsunami simulations are performed with the TsunAWI simulation code (first a fast, coarse run and later a high-resolution, computationally intensive analysis), allowing for an early warning in potentially affected areas. Finally, based on the previous results, a loss assessment is performed with a dynamic exposure model that uses open data such as OpenStreetMap. To consolidate the workflow and ensure that its time constraints are respected, we are developing an extension of a time-constrained dataflow model of computation, layered above and below the workflow management tools of both the HPC resources and the Cloud infrastructure. This model of computation is also used to express workflow tasks at the right granularity so that they benefit from the data management optimisation facilities of the LEXIS project. This paper describes the workflow, the associated computations and the model of computation within the LEXIS platform.
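The staged, conditional structure of the workflow can be illustrated with a minimal sketch. All function names, return values and the deadline parameter below are hypothetical placeholders standing in for the real LEXIS applications (OpenQuake ShakeMap computation, coarse and fine TsunAWI runs, and the dynamic-exposure loss model); the sketch only shows the control flow: the coarse tsunami run always precedes the refined one so that an early warning is available quickly, and the expensive refinement is attempted only within the remaining time budget.

```python
import time

# Hypothetical stand-ins for the real LEXIS stage applications.
def compute_shakemap(event):
    return {"event": event, "shaking": "shakemap-grid"}

def coarse_tsunami(shakemap):
    # Fast, coarse simulation: delivers the early warning.
    return {"warning_level": "advisory"}

def fine_tsunami(shakemap):
    # High-resolution, computationally intensive refinement.
    return {"inundation": "high-res-grid"}

def assess_loss(shakemap, tsunami=None):
    return {"loss_estimate": "pending", "tsunami_included": tsunami is not None}

def run_workflow(event, tsunamigenic, deadline_s=10.0):
    """Sequential sketch of the three-stage workflow with a simple time budget."""
    start = time.monotonic()
    shakemap = compute_shakemap(event)
    tsunami = None
    if tsunamigenic:
        tsunami = coarse_tsunami(shakemap)            # early warning first
        if time.monotonic() - start < deadline_s:     # refine only if time allows
            tsunami = {**tsunami, **fine_tsunami(shakemap)}
    return assess_loss(shakemap, tsunami)
```

In the actual platform, the stages are distributed across HPC and Cloud resources by the orchestration layer rather than run sequentially in one process; the time-budget check is a simplistic proxy for the time-constrained dataflow model of computation described in the paper.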
