Rapid development of cloud-native intelligent data pipelines for scientific data streams using the HASTE Toolkit

This paper introduces the HASTE Toolkit, a cloud-native software toolkit that partitions data streams to prioritize the use of limited resources, enabling more efficient data-intensive experiments. We propose a model that introduces automated, autonomous decision making into data pipelines, such that a stream of data can be partitioned into a tiered or ordered data hierarchy. Importantly, the partitioning is performed online and is based on data content rather than a priori metadata. At the core of the model are interestingness functions and policies. An interestingness function assigns a quantitative measure of interestingness, an interestingness score, to a single data object in the stream. Based on this score, a policy guides decisions on how to prioritize computational resources for that object. The HASTE Toolkit is a collection of tools for adapting data stream processing to this pipeline model. The result is smart data pipelines capable of effective, or even optimal, use of resources such as storage, compute, and network bandwidth, supporting experiments that require rapid processing of scientific data characterized by large individual data objects. We demonstrate the proposed model and our toolkit through two microscopy imaging case studies, each with its own interestingness functions, policies, and data hierarchy. The first concerns a high-content screening experiment in which images are analyzed in an on-premise container cloud, with the goal of prioritizing images for storage and subsequent computation. The second considers edge processing of images that are uploaded to the public cloud to drive a real-time control loop for a transmission electron microscope.

Key Points

- We propose a pipeline model for building intelligent pipelines for data streams that accounts for the actual information content of the data rather than a priori metadata, and we present the HASTE Toolkit, a cloud-native software toolkit that supports rapid development according to this model.
- We demonstrate how the HASTE Toolkit enables intelligent resource optimization in two image analysis case studies based on (a) high-content imaging and (b) transmission electron microscopy.
- We highlight the challenges of storing, processing, and transferring high-volume, high-velocity streamed scientific data in both cloud and cloud-edge use cases.
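To make the interestingness-function/policy model concrete, below is a minimal Python sketch of the abstraction. All names here (TierPolicy, mean_intensity_interestingness, the tier labels) are hypothetical illustrations invented for this sketch, not the actual HASTE Toolkit API.

```python
from dataclasses import dataclass
from typing import Callable

# An interestingness function maps one data object in the stream
# (here, a microscopy image held as raw bytes) to a score in [0, 1].
InterestingnessFunction = Callable[[bytes], float]

@dataclass
class TierPolicy:
    """Maps an interestingness score to a tier in the data hierarchy.

    Tiers are ordered from most to least valuable, e.g.
    ["hot-storage", "warm-storage", "discard"].
    """
    thresholds: list[float]  # descending score cut-offs, one per tier boundary
    tiers: list[str]         # len(tiers) == len(thresholds) + 1

    def place(self, score: float) -> str:
        # Return the first tier whose threshold the score meets;
        # objects below every threshold fall into the last tier.
        for threshold, tier in zip(self.thresholds, self.tiers):
            if score >= threshold:
                return tier
        return self.tiers[-1]

def mean_intensity_interestingness(image_bytes: bytes) -> float:
    """Toy interestingness function: normalized mean byte value.

    A real function might compute a focus measure or run a
    trained classifier on the image.
    """
    if not image_bytes:
        return 0.0
    return sum(image_bytes) / (255 * len(image_bytes))

# Usage: score each object in the stream and route it to a tier.
policy = TierPolicy(thresholds=[0.8, 0.4],
                    tiers=["hot-storage", "warm-storage", "discard"])

for obj in (b"\xff" * 64, b"\x80" * 64, b"\x05" * 64):  # stand-in stream
    score = mean_intensity_interestingness(obj)
    print(policy.place(score))  # hot-storage, warm-storage, discard
```

In a real deployment the toy scoring function would be replaced by domain-specific image analysis, and the policy would route objects to actual storage or compute tiers rather than printing labels; the key design point is that routing decisions are made online, per object, from content-derived scores.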
