Combination of fast and slow learning neural networks for quick adaptation and pruning redundant cells
One advantage of the neural network approach is that many instances can be learned with a small number of hidden units. However, such small networks usually require many iterations of gradient descent to learn. To realize quick adaptation with small networks, the paper presents a learning system consisting of several neural networks: a fast-learning network (F-Net), a slow-learning network (S-Net), and a main network (Main-Net). The F-Net learns new instances very quickly, in the manner of k-nearest neighbors, while the S-Net learns the output of the F-Net with a small number of hidden units. The resulting parameters of the S-Net are transferred to the Main-Net, which is used only for recognition. During the learning of the S-Net, the system does not learn any new instances, analogous to a sleeping biological system.
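A minimal sketch of this wake/sleep scheme may help make the roles of the three networks concrete. Everything below is an assumption for illustration: the class names, layer sizes, the use of 1-nearest-neighbor lookup for the F-Net, and the distillation loop are not taken from the paper, whose actual architectures and training procedure may differ.

```python
# Hypothetical sketch of the fast/slow learning system; names and details
# are illustrative assumptions, not the paper's implementation.
import numpy as np

class FNet:
    """Fast learner: memorizes instances and answers like 1-nearest neighbor."""
    def __init__(self):
        self.xs, self.ys = [], []

    def learn(self, x, y):
        # "Learning" is instantaneous: just store the instance.
        self.xs.append(np.asarray(x, float))
        self.ys.append(np.asarray(y, float))

    def predict(self, x):
        dists = [np.linalg.norm(np.asarray(x, float) - xi) for xi in self.xs]
        return self.ys[int(np.argmin(dists))]

class SNet:
    """Slow learner: a small one-hidden-layer MLP trained by gradient descent."""
    def __init__(self, n_in, n_hidden, n_out, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.5, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0.0, 0.5, (n_hidden, n_out))
        self.b2 = np.zeros(n_out)

    def forward(self, X):
        self.h = np.tanh(X @ self.W1 + self.b1)
        return self.h @ self.W2 + self.b2

    def train_step(self, X, Y, lr=0.05):
        # One gradient-descent step on mean squared error.
        err = self.forward(X) - Y
        dW2 = self.h.T @ err / len(X)
        db2 = err.mean(axis=0)
        dh = (err @ self.W2.T) * (1.0 - self.h ** 2)   # tanh derivative
        dW1 = X.T @ dh / len(X)
        db1 = dh.mean(axis=0)
        self.W2 -= lr * dW2; self.b2 -= lr * db2
        self.W1 -= lr * dW1; self.b1 -= lr * db1
        return float((err ** 2).mean())

def sleep_phase(f_net, s_net, epochs=2000):
    """Distill the F-Net into the S-Net; no new instances arrive meanwhile."""
    X = np.stack(f_net.xs)
    Y = np.stack([f_net.predict(x) for x in f_net.xs])  # targets = F-Net outputs
    for _ in range(epochs):
        s_net.train_step(X, Y)
    # Copy the learned parameters out, as if moving them to the Main-Net.
    return {k: v.copy() for k, v in vars(s_net).items()
            if k in ("W1", "b1", "W2", "b2")}

# Wake phase: the F-Net memorizes new instances of y = sin(x) immediately.
f = FNet()
for x in np.linspace(-3.0, 3.0, 20):
    f.learn([x], [np.sin(x)])

# Sleep phase: the S-Net slowly fits the F-Net's behavior; its parameters
# are then handed to the Main-Net, which serves recognition only.
s = SNet(n_in=1, n_hidden=8, n_out=1)
main_net_params = sleep_phase(f, s)
```

The design point this sketch tries to capture is the separation of concerns: memorization is instant and cheap (F-Net), while the slow gradient-descent compression into a small network (S-Net) runs offline, so recognition via the Main-Net is never blocked by training.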