Heterogeneous metaprogramming
This dissertation describes new language features and techniques for program generation that make high-level languages practical for programming low-level devices. Existing work in the programming languages community focuses on the theoretical properties of metaprogramming languages, that is, languages with explicit support for writing programs that generate programs. We instead focus on building real systems targeting two types of special-purpose device: sensor nodes and GPUs. Although we would like to gain the abstraction and safety benefits of a high-level language like Haskell when programming these systems, device constraints limit us to low-level languages. We show how to recover some of the benefits of a high-level language in these domains by using Haskell to generate programs in low-level languages, an approach we call heterogeneous metaprogramming.
We make the following contributions. First, we provide an implementation of quasiquotation that is now shipped as part of GHC, the de facto standard Haskell compiler. Quasiquotation allows Haskell expressions and patterns to be constructed using domain-specific, programmer-defined concrete syntax. Complementing our implementation, we give a new technique that leverages generic programming to transform an existing parser into a full quasiquoter with minimal changes. Second is Flask, a library that uses Haskell in a new way to capture reusable, higher-order dataflow patterns in sensor network programs that are parameterized by low-level code. We demonstrate the code savings enabled by Flask's abstraction facilities by re-implementing a large, deployed sensor network application in Flask, and we show that Flask scales to run on an actual sensor network of 160 nodes. Third is Nikola, which integrates GPU and CPU computation by automatically compiling a subset of Haskell to GPU binary code, requiring only minimal syntactic changes and no compiler modifications. Our translation relies on a new technique for avoiding code explosion in embedded languages. Our benchmarks demonstrate that, for certain computations, implementations in Haskell perform as well as hand-written CUDA.
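To make the first contribution concrete, the following minimal sketch (not taken from the dissertation) shows how an ordinary parser can be turned into a quasiquoter using GHC's QuasiQuotes machinery and the generic functions dataToExpQ and dataToPatQ from the template-haskell library. The Expr type, parseExpr, and the expr quoter are hypothetical names introduced purely for illustration; a real quasiquoter would reuse the existing AST and parser of the object language unchanged.

{-# LANGUAGE DeriveDataTypeable #-}
module ExprQuote (expr) where

import Data.Data (Data, Typeable)
import Language.Haskell.TH (Exp, Pat, Q)
import Language.Haskell.TH.Quote (QuasiQuoter(..), dataToExpQ, dataToPatQ)

-- A toy object-language AST; a real quasiquoter reuses the
-- domain-specific language's existing AST.
data Expr = Lit Integer
          | Add Expr Expr
  deriving (Show, Data, Typeable)

-- A stand-in parser (here it accepts only integer literals); in
-- practice this is the pre-existing parser for the object language.
parseExpr :: String -> Q Expr
parseExpr s =
  case reads s of
    [(n, "")] -> return (Lit n)
    _         -> fail ("expr: parse error on " ++ show s)

-- dataToExpQ and dataToPatQ generically convert any parsed value into
-- Template Haskell syntax; the (const Nothing) hook is where
-- antiquotation support would be plugged in.
quoteExprExp :: String -> Q Exp
quoteExprExp s = parseExpr s >>= dataToExpQ (const Nothing)

quoteExprPat :: String -> Q Pat
quoteExprPat s = parseExpr s >>= dataToPatQ (const Nothing)

expr :: QuasiQuoter
expr = QuasiQuoter
  { quoteExp  = quoteExprExp
  , quotePat  = quoteExprPat
  , quoteType = \_ -> fail "expr: no type quoter"
  , quoteDec  = \_ -> fail "expr: no declaration quoter"
  }

A client module that enables the QuasiQuotes extension could then write [expr|42|] in both expression and pattern positions, with any parse errors in the quoted syntax reported at Haskell compile time.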