Continuity of Approximation by Neural Networks in Lp Spaces

Devices such as neural networks typically approximate the elements of some function space X by elements of a nontrivial finite union M of finite-dimensional spaces. It is shown that if X = Lᵖ(Ω) (1 < p < ∞ and Ω ⊂ ℝᵈ), then for every positive constant Γ and every continuous map φ from X to M, ‖f − φ(f)‖ > ‖f − M‖ + Γ for some f in X, where ‖f − M‖ = inf{‖f − g‖ : g ∈ M} denotes the distance from f to a best approximation in M. Thus, no continuous finite neural network approximation can be within any positive constant of a best approximation in the Lᵖ-norm.
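The obstruction is topological: when M is a union of subspaces, any continuous path in M connecting points on different branches must pass through their intersection, so a continuous selection φ is forced far from the best approximant near the "ridge" where the branches are equidistant. The following toy sketch (an assumption-laden illustration in ℝ², not the theorem itself, which concerns the infinite-dimensional Lᵖ setting) makes this concrete with M the union of the two coordinate axes; the map `phi` and the helper `dist_to_M` are hypothetical names chosen for this example, not constructions from the paper.

```python
import numpy as np

def dist_to_M(f):
    # M = x-axis ∪ y-axis in R^2: the best-approximation error of f = (x, y)
    # is min(|x|, |y|) (drop the smaller coordinate).
    x, y = f
    return min(abs(x), abs(y))

def phi(f, eps=0.1):
    # One continuous map into M (hypothetical example): project onto the
    # nearer axis, but shrink the image toward the origin as |x| and |y|
    # approach each other, so the two branches agree (at 0) on |x| = |y|.
    x, y = f
    g = min(1.0, abs(abs(x) - abs(y)) / eps)  # ramps 0 -> 1 away from |x| = |y|
    if abs(x) >= abs(y):
        return np.array([g * x, 0.0])
    return np.array([0.0, g * y])

# Sample circles of growing radius R and record the worst excess of phi's
# error over the best-approximation error along each circle.
for R in [1.0, 10.0, 100.0]:
    thetas = np.linspace(0.0, np.pi / 2, 10001)
    excess = max(
        np.linalg.norm(f - phi(f)) - dist_to_M(f)
        for f in (R * np.array([np.cos(t), np.sin(t)]) for t in thetas)
    )
    print(f"R = {R:6.1f}: worst excess over best approximation ≈ {excess:.3f}")
```

In this sketch the worst excess occurs on the diagonal, where continuity forces φ to the origin: the error there is R while the best-approximation error is R/√2, so the excess grows like R(1 − 1/√2) ≈ 0.29 R, unbounded as R increases. This mirrors, in miniature, why no continuous φ can stay within a fixed Γ of best approximation.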