Vapnik-Chervonenkis Dimension Bounds for Two- and Three-Layer Networks
We show that the Vapnik-Chervonenkis (VC) dimension of the class of functions computed by arbitrary two-layer threshold networks, or by certain completely connected three-layer threshold networks, with real inputs is at least linear in the number of weights in the network. In Valiant's "probably approximately correct" (PAC) learning framework, this implies that the number of random training examples necessary for learning in these networks is at least linear in the number of weights.
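As a brief sketch of the implication (the notation $\mathcal{F}_W$ for the function class and $m$ for the sample size is introduced here only for illustration and does not appear in the abstract), the stated lower bound combines with standard PAC sample-size lower bounds in terms of VC dimension, such as those discussed in [2], roughly as follows:

\[
\mathrm{VCdim}(\mathcal{F}_W) = \Omega(W)
\quad\Longrightarrow\quad
m \;=\; \Omega\bigl(\mathrm{VCdim}(\mathcal{F}_W)\bigr) \;=\; \Omega(W),
\]

that is, any algorithm that PAC-learns the class must be given a number of training examples that grows at least linearly with the number of weights $W$.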
[1] L. G. Valiant. A theory of the learnable. STOC '84, 1984.
[2] A. Blumer, A. Ehrenfeucht, D. Haussler, and M. K. Warmuth. Learnability and the Vapnik-Chervonenkis dimension. JACM, 1989.
[3] P. L. Bartlett. Lower bounds on the Vapnik-Chervonenkis dimension of multi-layer threshold networks. COLT '93, 1993.
[4] E. B. Baum and D. Haussler. What size net gives valid generalization? Neural Computation, 1989.