A study of sample size with neural network
Xue Bai | Chao-Kun Cheng | Ying-Jin Cui | S. Davis
[1] Peter L. Bartlett, et al. The Sample Complexity of Pattern Classification with Neural Networks: The Size of the Weights is More Important than the Size of the Network, 1998, IEEE Trans. Inf. Theory.
[2] Ah Chung Tsoi, et al. Lessons in Neural Network Training: Overfitting May be Harder than Expected, 1997, AAAI/IAAI.
[3] Peter L. Bartlett, et al. Lower bounds on the Vapnik-Chervonenkis dimension of multi-layer threshold networks, 1993, COLT '93.
[4] O. Mangasarian, et al. Pattern Recognition Via Linear Programming: Theory and Application to Medical Diagnosis, 1989.
[5] David Haussler, et al. Learnability and the Vapnik-Chervonenkis dimension, 1989, JACM.
[6] R. Maenner, et al. Quantifying a critical training set size for generalization and overfitting using teacher neural networks, 1995.
[7] Michael Schmitt, et al. On the Sample Complexity for Nonoverlapping Neural Networks, 1999, Machine Learning.
[8] Kunio Doi, et al. Effect of a small number of training cases on the performance of massive training artificial neural network (MTANN) for reduction of false positives in computerized detection of lung nodules in low-dose CT, 2003, SPIE Medical Imaging.
[9] Rajkumar Thirumalainambi, et al. Training data requirement for a neural network to predict aerodynamic coefficients, 2003, SPIE Defense + Commercial Sensing.
[10] Leslie G. Valiant, et al. A general lower bound on the number of examples needed for learning, 1988, COLT '88.