MRAM Co-designed Processing-in-Memory CNN Accelerator for Mobile and IoT Applications

We present a Convolutional Neural Network (CNN) accelerator that co-designs non-volatile MRAM with a processing-in-memory architecture. The chip has been successfully fabricated in a 22 nm CMOS process. It provides more than 40 MB of MRAM capacity and achieves an energy efficiency of 9.9 TOPS/W, enabling multiple CNN models to reside on a single chip for mobile and IoT applications.
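The processing-in-memory idea behind the design can be illustrated with a minimal sketch: each memory sub-array acts as a crossbar that computes a matrix-vector product in place, so the stored weights never travel to a separate compute unit. The sketch below is purely illustrative and not the paper's actual circuit; the function names and the im2col mapping are assumptions chosen for clarity.

```python
import numpy as np

def im2col(image, k):
    """Unroll k x k patches of a 2-D image into columns."""
    h, w = image.shape
    cols = []
    for i in range(h - k + 1):
        for j in range(w - k + 1):
            cols.append(image[i:i + k, j:j + k].ravel())
    return np.array(cols).T  # shape: (k*k, num_patches)

def pim_conv2d(image, kernels):
    """Convolution as crossbar matrix-vector products.

    kernels: array of shape (num_filters, k, k); each flattened kernel
    is modeled as a row of cell conductances held in the (hypothetical)
    MRAM sub-array, so the weights stay resident in memory.
    """
    k = kernels.shape[-1]
    crossbar = kernels.reshape(kernels.shape[0], -1)
    patches = im2col(image, k)
    # Analog accumulation along the bit-lines is modeled as a dot product.
    out = crossbar @ patches
    side = image.shape[0] - k + 1
    return out.reshape(kernels.shape[0], side, side)
```

Because the weights stay in the non-volatile array, several models can be kept on-chip at once and selected without reloading from external DRAM, which is the property the abstract highlights for mobile and IoT use.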
