Implementation of DNN on a RISC-V Open Source Microprocessor for IoT devices

Logarithmic quantization [1] and feature extraction enable us to reduce model parameters to a great extent. Based on these methods, we have implemented a small-sized DNN on a RISC-V microprocessor with only 16 KB of RAM. We also propose a feature extraction algorithm that outperforms the original fully connected neural network while reducing the number of inputs by 12.25 $\times$. The MNIST [2] dataset is used as our training samples, and Chainer [8] is used to train the network. As a result, we reduced the weight size by nearly 86 $\times$, from 49.625 KB to 0.578 KB, which makes it possible to store the weights in arrays and load them directly into RAM.
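Logarithmic quantization rounds each weight to the nearest signed power of two, so storing only a small exponent (plus a sign) shrinks the weight memory and lets the multiply in a neuron be replaced by a bit shift on the microcontroller. A minimal NumPy sketch of this idea, where the function name, bit width, and zero handling are our illustrative assumptions rather than the paper's exact scheme:

```python
import numpy as np

def log_quantize(w, bits=4):
    """Quantize weights to signed powers of two (illustrative sketch).

    Returns the dequantized values and the integer exponents that a
    deployment would actually store (bit width is an assumption).
    """
    sign = np.sign(w)
    # Guard against log2(0); exact zeros map to a tiny magnitude.
    mag = np.maximum(np.abs(w), 1e-12)
    # Round the exponent and clip it to the representable signed range.
    lo, hi = -(2 ** (bits - 1)), 2 ** (bits - 1) - 1
    exp = np.clip(np.round(np.log2(mag)), lo, hi)
    return sign * np.power(2.0, exp), exp.astype(np.int8)

# Example: each weight snaps to the nearest power of two.
q, e = log_quantize(np.array([0.3, -0.12]))
# 0.3 -> 2^-2 = 0.25, -0.12 -> -(2^-3) = -0.125
```

Because only the exponents (here 4-bit integers) need to be kept, an array of such codes is small enough to be embedded as a constant table and loaded directly into a 16 KB RAM.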