Using Floating-Gate Memory to Train Ideal Accuracy Neural Networks
Matthew J. Marinella | Sapan Agarwal | Alexander H. Hsia | Diana Garland | John Niroula | Robin B. Jacobs-Gedrim | Michael S. Van Heukelom | Elliot Fuller | Bruce Draper
[1] Jennifer Hasler, et al. Vector-Matrix Multiply and Winner-Take-All as an Analog Classifier, 2014, IEEE Transactions on Very Large Scale Integration (VLSI) Systems.
[2] Yann LeCun, et al. The MNIST database of handwritten digits, 2005.
[3] Sapan Agarwal, et al. Li-Ion Synaptic Transistor for Low Power Analog Computing, 2017, Advanced Materials.
[4] Ojas Parekh, et al. Energy Scaling Advantages of Resistive Memory Crossbar Based Computation and Its Application to Sparse Coding, 2016, Frontiers in Neuroscience.
[5] G. W. Burr, et al. Experimental demonstration and tolerancing of a large-scale neural network (165,000 synapses), using phase-change memory as the synaptic weight element, 2014, IEEE International Electron Devices Meeting (IEDM).
[6] Eric Beyne, et al. Ultra-Fine Pitch 3D Integration Using Face-to-Face Hybrid Wafer Bonding Combined with a Via-Middle Through-Silicon-Via Process, 2016, 2016 IEEE 66th Electronic Components and Technology Conference (ECTC).
[7] F. Merrikh Bayat, et al. Fast, energy-efficient, robust, and reproducible mixed-signal neuromorphic classifier based on embedded NOR flash memory technology, 2017, 2017 IEEE International Electron Devices Meeting (IEDM).
[8] Dong-Hyun Kim, et al. High-speed and logic-compatible split-gate embedded flash on 28-nm low-power HKMG logic process, 2017, 2017 Symposium on VLSI Technology.
[9] Steven J. Plimpton, et al. Multiscale Co-Design Analysis of Energy, Latency, Area, and Accuracy of a ReRAM Analog Neural Training Accelerator, 2017, IEEE Journal on Emerging and Selected Topics in Circuits and Systems.
[10] Shimeng Yu, et al. Three-Dimensional NAND Flash for Vector–Matrix Multiplication, 2019, IEEE Transactions on Very Large Scale Integration (VLSI) Systems.
[11] M. Marinella, et al. A non-volatile organic electrochemical device as a low-voltage artificial synapse for neuromorphic computing, 2017, Nature Materials.
[12] Steven J. Plimpton, et al. Achieving ideal accuracies in analog neuromorphic computing using periodic carry, 2017, 2017 Symposium on VLSI Technology.
[13] Steven J. Plimpton, et al. Resistive memory device requirements for a neural algorithm accelerator, 2016, 2016 International Joint Conference on Neural Networks (IJCNN).
[14] Tetsuo Endoh, et al. Design impacts on NAND Flash memory core circuits with vertical MOSFETs, 2010, 2010 IEEE International Memory Workshop.
[15] Yusuf Leblebici, et al. Improved Deep Neural Network Hardware-Accelerators Based on Non-Volatile-Memory: The Local Gains Technique, 2017, 2017 IEEE International Conference on Rebooting Computing (ICRC).
[16] Hideto Hidaka, et al. A 28 nm Embedded Split-Gate MONOS (SG-MONOS) Flash Macro for Automotive Achieving 6.4 GB/s Read Throughput by 200 MHz No-Wait Read Operation and 2.0 MB/s Write Throughput at Tj of 170 °C, 2016, IEEE Journal of Solid-State Circuits.
[17] Jonathan A. Cox, et al. A Signal Processing Approach for Cyber Data Classification with Deep Neural Networks, 2015, Complex Adaptive Systems.
[18] Pritish Narayanan, et al. Experimental Demonstration and Tolerancing of a Large-Scale Neural Network (165,000 Synapses) Using Phase-Change Memory as the Synaptic Weight Element, 2015, IEEE Transactions on Electron Devices.