Simple approximation of sigmoidal functions: realistic design of digital neural networks capable of learning

Two different approaches to simplifying the nonlinearity in neural networks are presented. Both solutions are based on approximating the sigmoidal activation function commonly used in neural networks (extensions to a more general class of functions are under consideration). The first solution yields a very simple architecture but involves discontinuous functions; the second, slightly more complex, is based on a continuous function. This second solution has been used successfully in conjunction with the classical generalized delta rule algorithm.
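To make the flavor of such hardware-oriented approximations concrete, the sketch below (in C, and not taken from the paper) contrasts a discontinuous, shift-friendly staircase approximation of the sigmoid with a continuous piecewise-linear variant. The unit-width segments, power-of-two values, and function names are illustrative assumptions, not the schemes actually proposed here.

```c
#include <stdio.h>
#include <math.h>

/* Illustrative sketch only: two digital-friendly sigmoid approximations
 * in the spirit of the abstract; the paper's exact schemes may differ. */

/* Discontinuous approximation: hold the value 1 - 2^-(n+1) constant over
 * each segment [n, n+1) of |x|, so it is computed with shifts, not exp().
 * The function jumps at every integer, hence it is discontinuous. */
double sigmoid_step(double x)
{
    double ax = fabs(x);
    int n = (int)ax;
    if (n > 29) n = 29;                 /* clamp to keep shifts in range */
    double y = 1.0 - 1.0 / (double)(1u << (n + 1));  /* 1 - 2^-(n+1) */
    return (x >= 0.0) ? y : 1.0 - y;    /* exploit sigmoid symmetry */
}

/* Continuous approximation: linearly interpolate between the power-of-two
 * segment endpoints, so the function (though not its derivative) is
 * continuous and remains usable with gradient-based learning. */
double sigmoid_pwl(double x)
{
    double ax = fabs(x);
    int n = (int)ax;
    if (n > 29) n = 29;
    double frac = ax - n;               /* position inside [n, n+1) */
    double lo = 1.0 - 1.0 / (double)(1u << (n + 1));  /* value at n   */
    double hi = 1.0 - 1.0 / (double)(1u << (n + 2));  /* value at n+1 */
    double y = lo + frac * (hi - lo);
    return (x >= 0.0) ? y : 1.0 - y;
}

int main(void)
{
    /* Compare both approximations against the exact logistic sigmoid. */
    for (double x = -4.0; x <= 4.0; x += 1.0)
        printf("x=%5.1f  exact=%.4f  step=%.4f  pwl=%.4f\n",
               x, 1.0 / (1.0 + exp(-x)), sigmoid_step(x), sigmoid_pwl(x));
    return 0;
}
```

The power-of-two endpoint values are what make such schemes attractive in digital hardware, since multiplications reduce to shifts; the continuity of the second variant is what allows it to be paired with the generalized delta rule, which requires a well-behaved activation function during training.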