System-Level Benchmarking of Chiplet-based IMC Architectures for Deep Neural Network Acceleration
In-memory computing (IMC) for deep learning on a large monolithic chip faces area, yield, and fabrication-cost challenges as model sizes continue to grow. 2.5D, or chiplet-based, architectures integrate multiple small chiplets into a single large computing system, offering a feasible path to accelerating large deep learning models. In this work, we present a novel benchmarking tool, SIAM, to evaluate the performance of chiplet-based IMC architectures and to explore different architectural configurations. SIAM integrates device, circuit, architecture, network-on-chip (NoC), network-on-package (NoP), and DRAM access models to benchmark an end-to-end system. SIAM supports multiple deep neural networks (DNNs), different architectural configurations, and efficient design space exploration. We demonstrate the effectiveness of SIAM by benchmarking state-of-the-art DNNs across different datasets.
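To illustrate the kind of end-to-end composition the abstract describes, the sketch below combines per-component analytical latency models (IMC compute, NoP transfers, DRAM accesses) and sweeps the chiplet count. This is a minimal, hypothetical illustration: all function names, parameters, and numbers are assumptions for exposition, not SIAM's actual API or models.

```python
# Hypothetical sketch of a chiplet-level benchmarking flow: per-layer
# latency is the sum of IMC compute, network-on-package (NoP), and DRAM
# components. All bandwidths and workload numbers are illustrative.

def imc_compute_latency(macs, macs_per_chiplet_per_s, num_chiplets):
    """Compute time, assuming MACs split evenly across chiplets."""
    return macs / (macs_per_chiplet_per_s * num_chiplets)

def nop_latency(bytes_moved, nop_bw):
    """NoP transfer time for inter-chiplet activation traffic."""
    return bytes_moved / nop_bw

def dram_latency(bytes_accessed, dram_bw):
    """DRAM access time for data that spills off-package."""
    return bytes_accessed / dram_bw

def end_to_end_latency(layers, cfg):
    """Sum per-layer components, assuming no compute/transfer overlap."""
    total = 0.0
    for layer in layers:
        total += imc_compute_latency(layer["macs"],
                                     cfg["macs_per_chiplet"],
                                     cfg["num_chiplets"])
        total += nop_latency(layer["nop_bytes"], cfg["nop_bw"])
        total += dram_latency(layer["dram_bytes"], cfg["dram_bw"])
    return total

# Toy design-space sweep over chiplet counts for a two-layer DNN.
layers = [
    {"macs": 1e9, "nop_bytes": 2e6, "dram_bytes": 8e6},
    {"macs": 5e8, "nop_bytes": 1e6, "dram_bytes": 4e6},
]
for n in (4, 16, 64):
    cfg = {"num_chiplets": n, "macs_per_chiplet": 1e11,
           "nop_bw": 1e10, "dram_bw": 2e10}
    print(n, round(end_to_end_latency(layers, cfg) * 1e6, 2), "us")
```

Even this toy model shows the characteristic trade-off a real tool must capture: adding chiplets shrinks compute time, but the fixed NoP and DRAM terms eventually dominate, which is why joint exploration of compute and interconnect configurations matters.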