Processing-in-memory in High Bandwidth Memory (PIM-HBM) Architecture with Energy-efficient and Low Latency Channels for High Bandwidth System
Joungho Kim | Seongguk Kim | Subin Kim | Kyungjun Cho | Daehwan Lho | Shinyoung Park | Gapyeol Park | Taein Shin | Kyungjune Son | Hyunwook Park
[1] Joungho Kim, et al. Signal Integrity Design and Analysis of Silicon Interposer for GPU-Memory Channels in High-Bandwidth Memory Interface, 2018, IEEE Transactions on Components, Packaging and Manufacturing Technology.
[2] Joel Emer, et al. Eyeriss: A Spatial Architecture for Energy-Efficient Dataflow for Convolutional Neural Networks, 2016, 2016 ACM/IEEE 43rd Annual International Symposium on Computer Architecture (ISCA).
[3] Reum Oh, et al. 18.2 A 1.2V 20nm 307GB/s HBM DRAM with at-speed wafer-level I/O test scheme and adaptive refresh considering temperature distribution, 2016, 2016 IEEE International Solid-State Circuits Conference (ISSCC).
[4] Xi Chen, et al. Ground-referenced signaling for intra-chip and short-reach chip-to-chip interconnects, 2018, 2018 IEEE Custom Integrated Circuits Conference (CICC).
[5] Mahmut T. Kandemir, et al. Scheduling techniques for GPU architectures with processing-in-memory capabilities, 2016, 2016 International Conference on Parallel Architecture and Compilation Techniques (PACT).