Training Feedforward Neural Networks: An Algorithm Giving Improved Generalization
[1] O. Mangasarian,et al. Multisurface method of pattern separation for medical diagnosis applied to breast cytology , 1990, Proceedings of the National Academy of Sciences of the United States of America.
[2] Wolfram Schiffmann,et al. Comparison of optimized backpropagation algorithms , 1993, ESANN.
[3] Harris Drucker,et al. Improving generalization performance using double backpropagation , 1992, IEEE Trans. Neural Networks.
[4] Christian Lebiere,et al. The Cascade-Correlation Learning Architecture , 1989, NIPS.
[5] Jun Wang,et al. Characterization of training errors in supervised learning using gradient-based rules , 1993, Neural Networks.
[6] Ehud D. Karnin,et al. A simple procedure for pruning back-propagation trained neural networks , 1990, IEEE Trans. Neural Networks.
[7] Marie Cottrell,et al. Time series and neural network: a statistical method for weight elimination , 1993, ESANN.
[8] Terrence J. Sejnowski,et al. Parallel Networks that Learn to Pronounce English Text , 1987, Complex Syst..
[9] David Haussler,et al. What Size Net Gives Valid Generalization? , 1989, Neural Computation.
[10] Petri Koistinen,et al. Using additive noise in back-propagation training , 1992, IEEE Trans. Neural Networks.
[11] Anthony N. Burkitt,et al. Optimization of the Architecture of Feed-forward Neural Networks with Hidden Layers by Unit Elimination , 1991, Complex Syst..
[12] Steve G. Romaniuk. Pruning Divide & Conquer networks , 1993.
[13] Charles W. Lee. Learning in neural networks by using tangent planes to constraint surfaces , 1993, Neural Networks.
[14] Ryotaro Kamimura,et al. Internal representation with minimum entropy in recurrent neural networks: minimizing entropy through inhibitory connections , 1993.