Simple approximation of sigmoidal functions: realistic design of digital neural networks capable of learning
Two different approaches to simplifying the nonlinearity in neural networks are presented. Both solutions are based on approximating the sigmoidal mapper commonly used in neural networks (extensions allowing the approximation of a more general class of functions are under consideration). The first solution yields a very simple architecture but involves discontinuous functions; the second, slightly more complex, is based on a continuous function. The second solution has been successfully used in conjunction with the classical generalized delta rule algorithm.
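The abstract does not specify the authors' exact approximations, but the two approaches can be illustrated with a common pair of hardware-friendly schemes: a discontinuous staircase (quantized) approximation, and a continuous piecewise-linear ramp that matches the sigmoid's value and slope at the origin. The following sketch is purely illustrative; the function names and the quantization level count are assumptions, not the paper's design.

```python
import math

def sigmoid(x):
    """Reference sigmoid: 1 / (1 + exp(-x))."""
    return 1.0 / (1.0 + math.exp(-x))

def pwl_sigmoid(x):
    """Continuous piecewise-linear approximation: a ramp with
    slope 0.25 (the sigmoid's slope at x = 0) clamped to [0, 1].
    Being continuous, a function of this kind remains compatible
    with gradient-based training such as the generalized delta rule."""
    return min(max(0.25 * x + 0.5, 0.0), 1.0)

def staircase_sigmoid(x, levels=8):
    """Discontinuous approximation: quantize the ramp to a few
    output levels, as a very simple digital circuit might.
    The number of levels (8 here) is an arbitrary choice."""
    ramp = pwl_sigmoid(x)
    return round(ramp * (levels - 1)) / (levels - 1)
```

With only a multiply, an add, and two comparisons, `pwl_sigmoid` stays within about 0.02 of the true sigmoid near the origin while saturating exactly at 0 and 1 for large inputs.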