A 2.75-to-75.9TOPS/W Computing-in-Memory NN Processor Supporting Set-Associate Block-Wise Zero Skipping and Ping-Pong CIM with Simultaneous Computation and Weight Updating
Nan Sun | Huazhong Yang | Yongpan Liu | Zhe Yuan | Xiaoyu Feng | Jian-Wei Su | Jinshan Yue | Yifan He | Yuxuan Huang | Mingtao Zhan | Meng-Fan Chang | Xueqing Li | Jiaxin Liu | Yipeng Wang | Yen-Lin Chung | Ping-Chun Wu | Li-Yang Hung
[1] Meng-Fan Chang et al., "14.3 A 65nm Computing-in-Memory-Based CNN Processor with 2.9-to-35.8TOPS/W System Energy Efficiency Using Dynamic-Sparsity Performance-Scaling Architecture and Energy-Efficient Inter/Intra-Macro Data Reuse," 2020 IEEE International Solid-State Circuits Conference (ISSCC), 2020.
[2] Meng-Fan Chang et al., "15.5 A 28nm 64Kb 6T SRAM Computing-in-Memory Macro with 8b MAC Operation for AI Edge Chips," 2020 IEEE International Solid-State Circuits Conference (ISSCC), 2020.
[3] Shih-Chieh Chang et al., "15.2 A 28nm 64Kb Inference-Training Two-Way Transpose Multibit 6T SRAM Compute-in-Memory Macro for AI Edge Chips," 2020 IEEE International Solid-State Circuits Conference (ISSCC), 2020.