An Efficient Hardware Architecture for a Neural Network Activation Function Generator

This paper proposes an efficient hardware architecture for a function generator suitable for an artificial neural network (ANN). A spline-based approximation function is designed that provides a good trade-off between accuracy and silicon area, whilst also being inherently scalable and adaptable to numerous activation functions. This is achieved by using a minimax polynomial on each segment and by placing the approximating polynomials optimally, with the placement determined by a genetic algorithm. The approximation error of the proposed method compares favourably to all related research in this field. Efficient hardware multiplication circuitry is used in the implementation, which reduces the area overhead and increases the throughput.
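The general idea, splitting the input domain into segments and evaluating a low-order polynomial per segment, can be sketched in software. The Python sketch below is purely illustrative: it uses hypothetical uniform breakpoints and per-segment least-squares fits, whereas the paper's architecture uses minimax polynomials with segment boundaries chosen by a genetic algorithm and a fixed-point hardware datapath, so the accuracy reported here is not comparable to the paper's results.

```python
# Illustrative sketch (not the paper's coefficients or breakpoints): approximate
# the logistic sigmoid on [0, 8) with a few quadratic segments, exploiting the
# symmetry sigmoid(-x) = 1 - sigmoid(x). Breakpoints here are uniform and the
# fits are least-squares; the paper instead places breakpoints with a genetic
# algorithm and fits minimax polynomials.
import numpy as np

# Hypothetical segment boundaries for the positive half of the input range.
BREAKPOINTS = np.array([0.0, 2.0, 4.0, 8.0])

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def fit_segments(f, breakpoints, degree=2, samples=256):
    """Fit one polynomial of the given degree per segment (least-squares)."""
    coeffs = []
    for lo, hi in zip(breakpoints[:-1], breakpoints[1:]):
        x = np.linspace(lo, hi, samples)
        coeffs.append(np.polyfit(x, f(x), degree))
    return coeffs

SEG_COEFFS = fit_segments(sigmoid, BREAKPOINTS)

def sigmoid_spline(x):
    """Evaluate the piecewise-polynomial approximation, using symmetry for x < 0."""
    x = np.asarray(x, dtype=float)
    ax = np.abs(x)
    y = np.ones_like(ax)  # saturate to 1 beyond the last breakpoint
    for (lo, hi), c in zip(zip(BREAKPOINTS[:-1], BREAKPOINTS[1:]), SEG_COEFFS):
        m = (ax >= lo) & (ax < hi)
        y[m] = np.polyval(c, ax[m])
    return np.where(x >= 0.0, y, 1.0 - y)  # sigmoid(-x) = 1 - sigmoid(x)

if __name__ == "__main__":
    xs = np.linspace(-8.0, 8.0, 2001)
    err = np.max(np.abs(sigmoid_spline(xs) - sigmoid(xs)))
    print(f"max abs error with 3 uniform quadratic segments: {err:.2e}")
```

In a hardware realisation each segment's coefficients would be stored in a small lookup table indexed by the most significant input bits, and the polynomial would be evaluated with fixed-point multipliers and adders; the software sketch only mirrors the segmentation and per-segment evaluation, not the datapath.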
