n-h-1 networks store no less than n·h+1 examples, but sometimes no more
The author shows that an n-h-1 artificial neural network, with n real inputs, a single layer of h hidden units, and one binary output unit, can correctly store at least n·h+1 examples in general position. The proof is constructive, so the weights are obtained deterministically from the examples. The result can be viewed as a generalization of the fact that a single threshold gate can memorize any n+1 examples in general position. The number obtained is a good lower bound on the network's capacity and a substantial improvement on the previous best bound, due to S. Akaho and S. Amari (1990). It is also shown that the figure n·h+1 is tight in a certain sense.
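To make the base case concrete, here is a minimal numerical sketch (not the paper's construction; the variable names are illustrative). For the special case h = 1, any n+1 points in general position in R^n are affinely independent, so the augmented matrix [x_i | 1] is nonsingular and a single threshold gate can reproduce any binary labeling of them by solving a linear system for its weights and bias.

```python
import numpy as np

# Hedged sketch of the h = 1 case: a single threshold gate stores any
# n+1 examples in general position. General position here means the
# points are affinely independent, so the augmented matrix [x_i | 1]
# is invertible and we can match any target values exactly.

rng = np.random.default_rng(0)
n = 4                                     # input dimension
X = rng.standard_normal((n + 1, n))       # n+1 random points (in general position a.s.)
y = rng.choice([-1.0, 1.0], size=n + 1)   # arbitrary binary labels

A = np.hstack([X, np.ones((n + 1, 1))])   # augmented matrix [x_i | 1]
wb = np.linalg.solve(A, y)                # solve w . x_i + b = y_i exactly
w, b = wb[:-1], wb[-1]

pred = np.sign(X @ w + b)                 # threshold gate output
assert np.array_equal(pred, y)            # all n+1 examples stored correctly
```

The paper's constructive proof extends this idea to a layer of h hidden units, which is how the bound grows to n·h+1 examples.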
[1] T. M. Cover, "Geometrical and Statistical Properties of Systems of Linear Inequalities with Applications in Pattern Recognition," IEEE Transactions on Electronic Computers, 1965.
[2] S. Akaho and S. Amari, "On the capacity of three-layer networks," Proc. IJCNN International Joint Conference on Neural Networks, 1990.