Declarative coordination of graph-based parallel programs

Declarative programming has long been hailed as a promising approach to parallel programming because it makes it easier to reason about programs while hiding the implementation details of parallelism from the programmer. However, this strength is also a weakness: it leaves the programmer with no straightforward way to optimize programs for performance. In this paper, we introduce Coordinated Linear Meld (CLM), a concurrent forward-chaining linear logic programming language with a declarative way to coordinate the execution of parallel programs, allowing the programmer to specify arbitrary scheduling and data-partitioning policies. Our approach lets the programmer write graph-based declarative programs and then, optionally, use coordination to fine-tune parallel performance. We specify the set of coordination facts, discuss their implementation in a parallel virtual machine, and show, through examples, how they can be used to optimize parallel execution. Finally, we compare the performance of CLM programs against the original uncoordinated Linear Meld and several other frameworks.
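To illustrate the kind of scheduling policy that coordination is meant to express, consider single-source shortest paths over a graph, where processing nodes in order of smallest tentative distance avoids redundant work. The following Python sketch (not CLM syntax; the graph and names are hypothetical) simulates the effect of such a priority policy by counting node relaxations under FIFO versus priority scheduling of the pending work:

```python
import heapq
from collections import deque

# Hypothetical example graph (adjacency list: node -> [(neighbor, weight)]).
GRAPH = {
    'a': [('b', 1), ('d', 10)],
    'b': [('c', 1)],
    'c': [('d', 1)],
    'd': [('e', 1)],
    'e': [],
}

def sssp(source, prioritized):
    """Single-source shortest paths; the work list is either a FIFO queue
    or a priority queue ordered by tentative distance."""
    dist = {v: float('inf') for v in GRAPH}
    dist[source] = 0
    work = [(0, source)] if prioritized else deque([(0, source)])
    relaxations = 0
    while work:
        d, node = heapq.heappop(work) if prioritized else work.popleft()
        if d > dist[node]:
            continue  # stale work item: a shorter path was found meanwhile
        for nbr, w in GRAPH[node]:
            if d + w < dist[nbr]:
                dist[nbr] = d + w
                relaxations += 1
                if prioritized:
                    heapq.heappush(work, (dist[nbr], nbr))
                else:
                    work.append((dist[nbr], nbr))
    return dist, relaxations

dist_fifo, n_fifo = sssp('a', prioritized=False)
dist_prio, n_prio = sssp('a', prioritized=True)
assert dist_fifo == dist_prio  # both policies compute the same distances
assert n_prio < n_fifo         # but the priority policy does less work
```

In CLM, the programmer would express the same intent declaratively, by deriving coordination facts that raise the scheduling priority of nodes with smaller tentative distances, rather than managing a work queue by hand as done here.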
