Interoperability Between Software and Hardware

While WIPP has been designed for big image data experiments, its execution speed depends on the underlying hardware, which can vary with each deployment. Readers may be interested in optimizing WIPP for their hardware or in rewriting existing algorithms to better utilize available RAM, CPU, and bandwidth. In this chapter, our goal is to assist the reader in:

- choosing hardware for a WIPP deployment,
- characterizing image data to estimate storage and processing requirements,
- measuring execution time, and
- leveraging several known models for parallel execution of image processing algorithms (a minimal sketch of these estimates follows this list).
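
The back-of-envelope arithmetic behind these goals is simple enough to script. The sketch below is a minimal illustration, not part of WIPP itself; all function names, image dimensions, and workload numbers are hypothetical. It estimates raw storage for an image collection, times a function call, and evaluates Amdahl's and Gustafson's speedup models for an assumed parallel fraction and core count.

```python
import time

def storage_bytes(width, height, channels, bytes_per_sample, n_images):
    """Raw (uncompressed) storage estimate for an image collection."""
    return width * height * channels * bytes_per_sample * n_images

def amdahl_speedup(p, n):
    """Amdahl's law: speedup for parallel fraction p on n processors."""
    return 1.0 / ((1.0 - p) + p / n)

def gustafson_speedup(p, n):
    """Gustafson's law: scaled speedup when problem size grows with n."""
    return (1.0 - p) + p * n

def timed(fn, *args, **kwargs):
    """Measure wall-clock execution time of fn; returns (result, seconds)."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    return result, time.perf_counter() - start

if __name__ == "__main__":
    # Hypothetical collection: 100,000 16-bit grayscale tiles of 2048 x 2048 pixels.
    total = storage_bytes(2048, 2048, channels=1, bytes_per_sample=2,
                          n_images=100_000)
    print(f"Raw storage: {total / 1e12:.2f} TB")

    # Time a representative computation (here, just the estimate itself).
    _, seconds = timed(storage_bytes, 2048, 2048, 1, 2, 100_000)
    print(f"Elapsed: {seconds:.6f} s")

    # Speedup models for a workload assumed to be 90% parallelizable on 16 cores.
    print(f"Amdahl:    {amdahl_speedup(0.9, 16):.2f}x")
    print(f"Gustafson: {gustafson_speedup(0.9, 16):.2f}x")
```

Comparing the two models makes the chapter's point concrete: for the same assumptions, Amdahl's law bounds the speedup at 6.4x because the serial fraction dominates, while Gustafson's law predicts 14.5x once the problem size is allowed to scale with the number of processors.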
