On the Realization of a Kolmogorov Network

with Q ≥ 2N + 1 provides theoretical support for neural networks that implement multivariate mappings (Hecht-Nielsen 1987; Lippmann 1987). Girosi and Poggio (1989) criticized Kolmogorov's theorem as irrelevant. They based their criticism mainly on the fact that the inner functions ψ_pq are highly nonsmooth and the output functions g_q are not in a parameterized form. However, this criticism was not convincing: Kurkova (1991) argued that highly nonsmooth functions can be regarded as limits or sums of infinite series of smooth functions, and that the problems in realizing a Kolmogorov network can be eliminated by approximately implementing ψ_pq and g_q with known networks.

In this note we present our view on the discussion from a more essential point of view. Since ψ_pq in equation 0.1 should be universal, Kolmogorov's theorem can be regarded as a proof of a transformation of the representation of multivariate functions in terms of the Q univariate output functions g_q. [In some improved versions of Kolmogorov's theorem it is proved that only one g in equation 0.1 is necessary (Lorentz 1966).] Such a strategy is embedded in the network structure shown in Figure 1. (Note that the block T is independent of f.) If Figure 1 is thought of as a general network structure for the approximation of multivariate functions, the question is whether an arbitrarily given multivariate function f can be (approximately) implemented through an (approximate) implementation of the corresponding Q univariate functions g_q. To this question we have an answer as stated below:
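The two-stage structure of equation 0.1 (a fixed inner block T followed by Q univariate output functions) can be sketched as follows. This is only an illustrative evaluation routine: the placeholder functions used below are smooth identities chosen for readability, not Kolmogorov's actual (highly nonsmooth) inner functions, and the names `psi` and `g` simply mirror the ψ_pq and g_q of the text.

```python
# Sketch of the network structure of Figure 1: a fixed inner block T
# (the functions psi_pq, independent of f) feeding Q univariate
# output functions g_q.

def kolmogorov_net(x, psi, g):
    """Evaluate sum_q g_q( sum_p psi_pq(x_p) ) for an input vector x.

    psi[p][q] is the inner function applied to coordinate x_p for
    channel q; g[q] is the q-th univariate output function.
    """
    N = len(x)
    Q = len(g)  # Q >= 2N + 1 in Kolmogorov's theorem
    out = 0.0
    for q in range(Q):
        s = sum(psi[p][q](x[p]) for p in range(N))  # inner block T
        out += g[q](s)                              # output function g_q
    return out

# Toy instantiation with N = 2, Q = 2N + 1 = 5 and identity functions
# as placeholders (purely illustrative):
N, Q = 2, 5
psi = [[(lambda t: t) for _ in range(Q)] for _ in range(N)]
g = [(lambda s: s) for _ in range(Q)]
print(kolmogorov_net((1.0, 2.0), psi, g))  # 5 * (1.0 + 2.0) = 15.0
```

Note that f enters only through the g_q: the inner block T is built once and reused for every target function, which is exactly the transformation of representation discussed above.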