We aim to implement deep neural networks in edge computing environments for real-world applications such as the Internet of Things (IoT) and FinTech, in order to exploit the significant achievements of Deep Learning in recent years. In particular, we focus on algorithm implementation on FPGAs, since an FPGA is one of the most promising devices for low-cost, low-power edge computing. In this work, we introduce Binary-DCGAN (B-DCGAN), a Deep Convolutional GAN model with binary weights and activations that uses integer-valued operations in the forward pass (at both train time and run time), and we show how to implement B-DCGAN on an FPGA (Xilinx Zynq). Using B-DCGAN, we conduct a feasibility study of the FPGA's characteristics and performance for Deep Learning. Because binarization and integer-valued operations reduce the required memory capacity and the number of circuit gates, they are very effective for FPGA implementation. On the other hand, these reductions degrade the quality of the data generated by the model, so we also investigate their influence.
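As a rough illustration of what "binary weights and activations with integer-valued forward operations" can mean in practice, the following Python/NumPy sketch binarizes real-valued tensors to {-1, +1} with a sign function and evaluates a dense layer's forward pass using only integer arithmetic. This is a generic illustration in the style of Binarized Neural Networks, not the authors' exact B-DCGAN implementation; the function names and shapes are hypothetical.

```python
import numpy as np

def binarize(x):
    # Deterministic binarization: map real values to {-1, +1}.
    # (Hypothetical helper; the paper's actual binarization scheme may differ.)
    return np.where(x >= 0, 1, -1).astype(np.int8)

def binary_dense_forward(activations_real, weights_real):
    # Binarize both activations and weights, then run the forward pass
    # entirely in integer arithmetic -- the property that keeps memory
    # and logic usage low when the layer is mapped onto an FPGA.
    a_bin = binarize(activations_real)   # (batch, in_features), int8
    w_bin = binarize(weights_real)       # (in_features, out_features), int8
    # Accumulate in a wider integer type to avoid overflow.
    return a_bin.astype(np.int32) @ w_bin.astype(np.int32)

# Tiny usage example
rng = np.random.default_rng(0)
a = rng.standard_normal((2, 8))
w = rng.standard_normal((8, 4))
print(binary_dense_forward(a, w))        # integer-valued outputs
```

The same idea extends to the deconvolution layers of a DCGAN generator: once weights and activations are restricted to {-1, +1}, multiply-accumulate reduces to additions and subtractions (or XNOR-popcount on packed bits), which is what makes the hardware mapping attractive.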