Asymptotic Inferential Capabilities of Feed-Forward Neural Networks

We analyse how the inferential capabilities of feed-forward neural networks change when the complexity of the internal connection pattern is allowed to grow, either by expanding the number of layers or by increasing the number of neurons in a hidden layer. We obtain the learning curves for all the Boolean functions represented by certain small networks, either through an exhaustive enumeration of all possible synaptic matrices or through a Monte Carlo sampling of the synaptic space. We show that, although the learning curves are not universal (networks with different numbers of neurons do not have the same learning curves), an asymptotic behaviour emerges as more hidden layers, or more neurons in a given hidden layer, are added. This means that, beyond a minimum complexity of the connection pattern, no significant changes occur in the network regarded as an inferential system. Our results suggest, in a very speculative way, why the mammalian neocortex evolved by expanding its surface rather than by increasing its thickness.
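The exhaustive-enumeration idea can be illustrated on a toy case: a single two-input threshold neuron whose weights and bias are restricted to {-1, 0, 1}. Sweeping every such "synaptic matrix" and collecting the distinct truth tables shows which Boolean functions the unit can represent (the restricted weight set and all identifiers below are assumptions for illustration, not the paper's actual networks):

```python
from itertools import product

def truth_table(w1, w2, b):
    """Truth table of a threshold unit: output 1 iff the weighted sum
    w1*x1 + w2*x2 + b is strictly positive, over all inputs in {0,1}^2."""
    return tuple(int(w1 * x1 + w2 * x2 + b > 0)
                 for x1, x2 in product((0, 1), repeat=2))

# Exhaustively enumerate every synaptic configuration (27 in total)
# and keep the distinct Boolean functions they realise.
functions = {truth_table(w1, w2, b)
             for w1, w2, b in product((-1, 0, 1), repeat=3)}

print(len(functions))             # distinct Boolean functions found
print((0, 0, 0, 1) in functions)  # AND is representable
print((0, 1, 1, 0) in functions)  # XOR is not (not linearly separable)
```

For larger networks the synaptic space grows too quickly for such a sweep, which is where the Monte Carlo sampling mentioned above would replace the exhaustive loop.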