A Programmable Neural-Network Inference Accelerator Based on Scalable In-Memory Computing
Hongyang Jia | Yinqi Tang | Hossein Valavi | Naveen Verma | Murat Ozatay | Rakshit Pathak | Jinseok Lee
[1] Meng-Fan Chang, et al. 14.3 A 65nm Computing-in-Memory-Based CNN Processor with 2.9-to-35.8TOPS/W System Energy Efficiency Using Dynamic-Sparsity Performance-Scaling Architecture and Energy-Efficient Inter/Intra-Macro Data Reuse, 2020, 2020 IEEE International Solid-State Circuits Conference (ISSCC).
[2] Vivienne Sze, et al. Eyeriss: An Energy-Efficient Reconfigurable Accelerator for Deep Convolutional Neural Networks, 2017, IEEE Journal of Solid-State Circuits.
[3] Xi Chen, et al. A 5.1pJ/Neuron 127.3μs/Inference RNN-based Speech Recognition Processor using 16 Computing-in-Memory SRAM Macros in 65nm CMOS, 2019, 2019 Symposium on VLSI Circuits.
[4] David Blaauw, et al. 14.2 A Compute SRAM with Bit-Serial Integer/Floating-Point Operations for Programmable In-Memory Vector Acceleration, 2019, 2019 IEEE International Solid-State Circuits Conference (ISSCC).
[5] Yinqi Tang, et al. A Programmable Heterogeneous Microprocessor Based on Bit-Scalable In-Memory Computing, 2020, IEEE Journal of Solid-State Circuits.
[6] Marian Verhelst, et al. An always-on 3.8μJ/86% CIFAR-10 mixed-signal binary CNN processor with all memory on chip in 28nm CMOS, 2018, 2018 IEEE International Solid-State Circuits Conference (ISSCC).
[7] Hossein Valavi, et al. A 64-Tile 2.4-Mb In-Memory-Computing CNN Accelerator Employing Charge-Domain Compute, 2019, IEEE Journal of Solid-State Circuits.
[8] Rong Jin, et al. 7.2 A 12nm Programmable Convolution-Efficient Neural-Processing-Unit Chip Achieving 825TOPS, 2020, 2020 IEEE International Solid-State Circuits Conference (ISSCC).