PLAM: A Posit Logarithm-Approximate Multiplier
Raul Murillo | Alberto A. Del Barrio | Guillermo Botella | Min Soo Kim | HyunJin Kim | Nader Bagherzadeh
[1] Nader Bagherzadeh, et al. Efficient Mitchell's Approximate Log Multipliers for Convolutional Neural Networks. IEEE Transactions on Computers, 2019.
[2] Masanori Hashimoto, et al. Logarithm-approximate floating-point multiplier is applicable to power-efficient neural network training. Integration, 2020.
[3] Florent de Dinechin, et al. Evaluating the Hardware Cost of the Posit Number System. 2019 29th International Conference on Field Programmable Logic and Applications (FPL), 2019.
[4] Alex Krizhevsky, et al. Learning Multiple Layers of Features from Tiny Images. 2009.
[5] Dhireesha Kudithipudi, et al. Deep Learning Training on the Edge with Low-Precision Posits. arXiv, 2019.
[6] Florent de Dinechin, et al. Designing Custom Arithmetic Data Paths with FloPoCo. IEEE Design & Test of Computers, 2011.
[7] Paolo Napoletano, et al. Benchmark Analysis of Representative Deep Neural Network Architectures. IEEE Access, 2018.
[8] Sergio Saponara, et al. Fast deep neural networks for image processing using posits and ARM scalable vector extension. Journal of Real-Time Image Processing, 2020.
[9] John L. Gustafson, et al. Deep Positron: A Deep Neural Network Using the Posit Number System. 2019 Design, Automation & Test in Europe Conference & Exhibition (DATE), 2019.
[10] Guillermo Botella Juan, et al. Deep PeNSieve: A deep learning framework based on the posit number system. Digital Signal Processing, 2020.
[11] John L. Gustafson, et al. Beating Floating Point at its Own Game: Posit Arithmetic. Supercomputing Frontiers and Innovations, 2017.
[12] Raghuraman Krishnamoorthi, et al. Quantizing deep convolutional networks for efficient inference: A whitepaper. arXiv, 2018.
[13] Rainer Leupers, et al. Parameterized Posit Arithmetic Hardware Generator. 2018 IEEE 36th International Conference on Computer Design (ICCD), 2018.
[14] Patricio Bulic, et al. Applicability of approximate multipliers in hardware neural networks. Neurocomputing, 2012.
[15] Hayden Kwok-Hay So, et al. Universal number posit arithmetic generator on FPGA. 2018 Design, Automation & Test in Europe Conference & Exhibition (DATE), 2018.
[16] David A. Patterson, et al. A new golden age for computer architecture. Communications of the ACM, 2019.
[17] Hayden Kwok-Hay So, et al. PACoGen: A Hardware Posit Arithmetic Core Generator. IEEE Access, 2019.
[18] Guillermo Botella Juan, et al. Customized Posit Adders and Multipliers using the FloPoCo Core Generator. 2020 IEEE International Symposium on Circuits and Systems (ISCAS), 2020.
[19] Jean-Michel Muller, et al. Posits: the good, the bad and the ugly. CoNGA'19, 2019.
[20] Yoshua Bengio, et al. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 1998.
[21] E. V. Krishnamurthy, et al. On Computer Multiplication and Division Using Binary Logarithms. IEEE Transactions on Electronic Computers, 1963.
[22] Jeff Johnson, et al. Rethinking floating point for deep learning. arXiv, 2018.
[23] Hayden Kwok-Hay So, et al. Architecture Generator for Type-3 Unum Posit Adder/Subtractor. 2018 IEEE International Symposium on Circuits and Systems (ISCAS), 2018.
[24] Jun Lin, et al. Evaluations on Deep Neural Networks Training Using Posit Number System. IEEE Transactions on Computers, 2020.