Activation Function Architectures for FPGAs
[1] Florent de Dinechin, et al. Certifying the Floating-Point Implementation of an Elementary Function Using Gappa, 2011, IEEE Transactions on Computers.
[2] Martin Langhammer, et al. Floating-Point DSP Block Architecture for FPGAs, 2015, FPGA.
[3] Bogdan Pasca. Correctly rounded floating-point division for DSP-enabled FPGAs, 2012, 22nd International Conference on Field Programmable Logic and Applications (FPL).
[4] Bogdan Pasca, et al. Single Precision Logarithm and Exponential Architectures for Hard Floating-Point Enabled FPGAs, 2017, IEEE Transactions on Computers.
[5] Lei Zhang, et al. Implementation of Fixed-point Neuron Models with Threshold, Ramp and Sigmoid Activation Functions, 2017.
[6] Nicolas Brisebarre, et al. Efficient polynomial L-approximations, 2007, 18th IEEE Symposium on Computer Arithmetic (ARITH '07).
[7] J. Harrison, et al. Efficient and accurate computation of upper bounds of approximation errors, 2011, Theor. Comput. Sci.
[8] Zbigniew Hajduk, et al. High accuracy FPGA activation function implementation for neural networks, 2017, Neurocomputing.
[9] Nilay Khare, et al. Hardware implementation of neural network with Sigmoidal activation functions using CORDIC, 2015, Microprocess. Microsystems.
[10] Martin Langhammer, et al. Faithful single-precision floating-point tangent for FPGAs, 2013, FPGA '13.
[11] John Harrison, et al. A Machine-Checked Theory of Floating Point Arithmetic, 1999, TPHOLs.
[12] Florent de Dinechin, et al. Floating-point exponential functions for DSP-enabled FPGAs, 2010, 2010 International Conference on Field-Programmable Technology.
[13] Christos-Savvas Bouganis, et al. A scalable FPGA architecture for non-linear SVM training, 2008, 2008 International Conference on Field-Programmable Technology.