Approximate Computing: Making Mobile Systems More Efficient
Approximate systems can reclaim energy that is currently lost to the "correctness tax" imposed by traditional safety margins designed to prevent worst-case scenarios. Researchers at the University of Washington have co-designed programming language extensions, a compiler, and a hardware co-processor to support approximate acceleration. Their end-to-end system comprises two building blocks. First, a programmer-guided compiler framework transforms programs to use approximation in a controlled way: the Approximate C Compiler for Energy and Performance Tradeoffs (ACCEPT) uses programmer annotations, static analysis, and dynamic profiling to find parts of a program that are amenable to approximation. Second, the compiler targets a system on a chip (SoC) augmented with a co-processor that efficiently evaluates coarse regions of approximate code: the Systolic Neural Network Accelerator in Programmable Logic (SNNAP) is a hardware accelerator prototype that executes those approximate regions on behalf of a general-purpose program.
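The sketch below is a minimal, CPU-only illustration of the neural-acceleration idea described above: an exact region of code is paired with a small multilayer perceptron that stands in for it. It is not the ACCEPT or SNNAP API; the function names, annotations implied by the comments, and the MLP weights are all illustrative assumptions. In the real system, the compiler identifies approximable regions from programmer annotations, the network is trained from input/output pairs gathered by dynamic profiling, and the trained network runs on the SNNAP co-processor rather than on the CPU.

```c
/* Conceptual sketch (hypothetical names, not the ACCEPT/SNNAP API):
 * an exact code region and a tiny neural network that approximates it. */
#include <math.h>
#include <stdio.h>

/* Exact region: the computation the program originally performs. */
static float exact_region(float x, float y) {
    return sqrtf(x * x + y * y);   /* e.g., a distance computation */
}

/* Approximate substitute: a small 2-4-1 MLP with ReLU activations.
 * The weights below are placeholders; in practice they would be fitted
 * offline from profiled input/output pairs of exact_region(). */
static const float W1[4][2] = {
    { 0.7f,  0.7f}, { 0.7f, -0.7f}, {-0.7f,  0.7f}, {-0.7f, -0.7f}
};
static const float B1[4] = {0.0f, 0.0f, 0.0f, 0.0f};
static const float W2[4] = {0.9f, 0.9f, 0.9f, 0.9f};
static const float B2    = 0.0f;

static float approx_region(float x, float y) {
    float out = B2;
    for (int i = 0; i < 4; i++) {
        float h = W1[i][0] * x + W1[i][1] * y + B1[i];
        if (h < 0.0f) h = 0.0f;            /* ReLU */
        out += W2[i] * h;
    }
    return out;
}

int main(void) {
    float x = 3.0f, y = 4.0f;
    printf("exact:  %f\n", exact_region(x, y));   /* 5.000000 */
    printf("approx: %f\n", approx_region(x, y));  /* roughly 5.04 */
    return 0;
}
```

The point of the sketch is the substitution itself: once a region is marked as tolerant of error, its exact implementation can be swapped for a learned, fixed-size network evaluation, which a systolic accelerator such as SNNAP can execute far more cheaply than the original code.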