Evaluation of an Analog Accelerator for Linear Algebra

Due to the end of supply-voltage scaling and the growing fraction of dark silicon in modern integrated circuits, researchers are looking for new, scalable ways to get useful computation out of existing silicon technology. In this paper we present a reconfigurable analog accelerator for solving systems of linear equations. Commonly perceived drawbacks of analog computing, such as low precision and accuracy, limited problem sizes, and difficulty of programming, are addressed by the methods we discuss. Using a prototype analog accelerator chip, we compare the performance and energy consumption of the analog solver against an efficient digital algorithm running on a CPU, and find that the analog accelerator approach may be an order of magnitude faster and reduce energy consumption by one third, depending on the accelerator design. Because linear algebra algorithms already run quickly and efficiently on digital computers, an analog accelerator that matches digital performance requires a large silicon footprint. Finally, we conclude that problem classes other than systems of linear equations may hold more promise for analog acceleration.
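
As a rough illustration of the comparison described above, the sketch below models a continuous-time analog solver for A x = b as the gradient flow dx/dt = -Aᵀ(A x - b), integrated with forward Euler, and checks the result against a direct digital solve. This is a simplified stand-in for one common analog formulation of linear-system solving, not the prototype chip's actual circuit; the matrix size, step size, and iteration count are illustrative assumptions.

```python
import numpy as np

# Minimal sketch, assuming a gradient-flow formulation: the analog state x(t)
# evolves as dx/dt = -A^T (A x - b), which converges to the solution of A x = b
# for an invertible, well-conditioned A. Forward Euler stands in for the
# continuous-time integration performed by the analog hardware.

rng = np.random.default_rng(0)
n = 8                                              # illustrative problem size
A = rng.standard_normal((n, n)) + n * np.eye(n)    # diagonally dominant, well conditioned
b = rng.standard_normal(n)

x = np.zeros(n)        # analog integrator state (initial condition)
dt = 1e-3              # time step standing in for continuous integration
for _ in range(20000):
    x += dt * (-A.T @ (A @ x - b))

x_digital = np.linalg.solve(A, b)   # reference digital solution
rel_err = np.linalg.norm(x - x_digital) / np.linalg.norm(x_digital)
print("relative error of analog-style solve:", rel_err)
```

In an actual accelerator the integration happens in continuous time on capacitors rather than in discrete Euler steps, so the loop above only approximates the dynamics; it does, however, capture the precision-versus-convergence trade-off the paper evaluates.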
