Improving generalization ability of universal learning networks with superfluous parameters