Which classes of functions can a given multilayer perceptron approximate?

Given a multilayer perceptron (MLP), there are functions that can be approximated by the MLP up to any degree of accuracy without increasing the number of hidden nodes. These functions belong to the closure F̄ of the set F of maps realizable by the MLP. In this paper, we give a list of maps with this property. In particular, we prove that rational functions belong to F̄ for networks with the arctangent activation function, and that exponential functions belong to F̄ for networks with the sigmoid activation function. Moreover, for a restricted class of MLPs, we prove that the list is complete and give an analytic definition of F̄.
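The closure property can be illustrated numerically. The sketch below is not taken from the paper; it uses two standard limits: a fixed two-hidden-node arctangent network (arctan(x+h) - arctan(x-h))/(2h) converges uniformly on compact sets to the rational function 1/(1+x^2) = d/dx arctan(x) as h -> 0, and a fixed one-hidden-node sigmoid network exp(-b)*sigma(x+b) converges uniformly on compact sets to exp(x) as the bias b -> -infinity. In both cases only the parameters vary, never the number of hidden nodes.

```python
import numpy as np

# Grid over a compact set on which we measure the sup-norm error.
x = np.linspace(-2.0, 2.0, 401)

# (1) Arctangent activation, two hidden nodes with fixed width:
#     (arctan(x + h) - arctan(x - h)) / (2h)  ->  1 / (1 + x^2)  as h -> 0,
#     since 1/(1+x^2) is the derivative of arctan.
def atan_net(x, h):
    return (np.arctan(x + h) - np.arctan(x - h)) / (2.0 * h)

err_rational = np.max(np.abs(atan_net(x, 1e-3) - 1.0 / (1.0 + x**2)))

# (2) Sigmoid activation, a single hidden node:
#     exp(-b) * sigmoid(x + b)  ->  exp(x)  as b -> -infinity,
#     because sigmoid(t) ~ exp(t) for t -> -infinity.
def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

b = -20.0
err_exp = np.max(np.abs(np.exp(-b) * sigmoid(x + b) - np.exp(x)))

print(f"arctan net vs 1/(1+x^2): sup error {err_rational:.2e}")
print(f"sigmoid net vs exp(x):   sup error {err_exp:.2e}")
```

Both sup-norm errors are far below 1e-5 on [-2, 2], so the limit functions lie in the closure of the maps realizable by these fixed-size networks, even though neither is exactly realizable by any finite parameter choice.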