Asymptotic Inferential Capabilities of Feed-Forward Neural Networks
We analyse how the inferential capabilities of feed-forward neural networks change when the complexity of the internal connection pattern is allowed to grow, either by expanding the number of layers or by increasing the number of neurons in a hidden layer. We obtain the learning curves for all the Boolean functions that are represented by some small networks, either through an exhaustive enumeration of all possible synaptic matrices or by Monte Carlo sampling of the synaptic space. We show that, although the learning curves are not universal (in the sense that networks with different numbers of neurons do not have the same learning curves), an asymptotic behaviour emerges as more hidden layers, or more neurons in a given hidden layer, are added. This means that, beyond a minimum complexity of the connection pattern, no significant changes occur in the network considered as an inferential system. Our results offer a very speculative explanation of why the mammalian neocortex evolved by expanding its surface rather than by increasing its thickness.
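The following is a minimal sketch of the kind of exhaustive enumeration described above, not the paper's actual setup: it assumes a tiny 2-2-1 feed-forward network with step activations and weights and thresholds restricted to {-1, 0, +1}, and counts how many synaptic configurations realize each Boolean function of the inputs. These counts are the raw material from which learning curves of this type can be built.

```python
import itertools
from collections import Counter

# Hypothetical illustration (network size, weight discretization and activation
# are assumptions for this sketch, not taken from the paper): enumerate every
# synaptic matrix of a 2-input, 2-hidden-unit, 1-output network and record
# which Boolean function each configuration computes.

INPUTS = list(itertools.product([0, 1], repeat=2))

def step(x):
    return 1 if x > 0 else 0

def forward(x, w_h, b_h, w_o, b_o):
    # Hidden layer followed by a single output unit, both with step activations.
    h = [step(sum(wi * xi for wi, xi in zip(w, x)) + b) for w, b in zip(w_h, b_h)]
    return step(sum(wo * hi for wo, hi in zip(w_o, h)) + b_o)

values = [-1, 0, 1]
counts = Counter()
# Exhaustive enumeration over all synaptic matrices: 3^9 = 19683 configurations.
for w11, w12, w21, w22, b1, b2, v1, v2, b_o in itertools.product(values, repeat=9):
    w_h, b_h = [(w11, w12), (w21, w22)], (b1, b2)
    truth_table = tuple(forward(x, w_h, b_h, (v1, v2), b_o) for x in INPUTS)
    counts[truth_table] += 1

print(f"{len(counts)} distinct Boolean functions realized (out of 16 possible)")
for fn, n in counts.most_common(3):
    print(fn, n)
```

For larger networks, where the synaptic space is too big to enumerate, the exhaustive loop would be replaced by random sampling of weight configurations, which is the Monte Carlo alternative mentioned in the abstract.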