A Survey of Deep Neural Networks: Deployment Location and Underlying Hardware

This survey overviews the landscape of emerging deep neural networks (neural networks for deep analytics) and explores which types of underlying hardware are likely to be used at the various deployment locations: dew, fog, and cloud computing (dew computing is performed by edge devices). The paper discusses how different architectural approaches, namely multicore processors, manycore processors, field-programmable gate arrays, and application-specific integrated circuits, could be used to implement deep neural networks at each deployment location. The proposed classification divides the existing solutions into twelve categories. This two-dimensional classification enables comparison of existing architectures, which are predominantly cloud-based, with anticipated future architectures, which are expected to be hybrid cloud-fog-dew architectures for Internet of Things applications. It also enables its users to make trade-offs among data-processing bandwidth, data-processing latency, and power consumption.
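
The two-dimensional structure of the classification can be illustrated with a minimal sketch: crossing the three deployment locations with the four hardware architecture types yields the twelve categories mentioned above. The code below is an assumption-based illustration of that cross product; the identifier names and the enumeration itself are not taken from the paper.

```python
from itertools import product

# Illustrative sketch (names are assumptions, not from the paper):
# the survey's two-dimensional classification crosses three deployment
# locations with four hardware architecture types, giving 3 x 4 = 12
# categories.
DEPLOYMENT_LOCATIONS = ["dew", "fog", "cloud"]            # dew = edge devices
HARDWARE_TYPES = ["multicore", "manycore", "FPGA", "ASIC"]

def classification_categories():
    """Enumerate the twelve (location, hardware) category pairs."""
    return list(product(DEPLOYMENT_LOCATIONS, HARDWARE_TYPES))

if __name__ == "__main__":
    for i, (location, hardware) in enumerate(classification_categories(), start=1):
        # Each category corresponds to a different trade-off point between
        # data-processing bandwidth, latency, and power consumption.
        print(f"{i:2d}. {location:5s} + {hardware}")
```

A user of the classification would pick the (location, hardware) pair whose bandwidth, latency, and power characteristics best match the target Internet of Things application; the enumeration above only makes the category space explicit.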
