Combination of fast and slow learning neural networks for quick adaptation and pruning redundant cells

One advantage of the neural network approach is that many instances can be learned with a small number of hidden units. However, such a small network usually requires many iterations of gradient descent to learn. To achieve quick adaptation with a small network, this paper presents a learning system consisting of three neural networks: a fast-learning network (F-Net), a slow-learning network (S-Net), and a main network (Main-Net). The F-Net learns new instances very quickly, in the manner of k-nearest neighbors, while the S-Net learns the outputs of the F-Net with a small number of hidden units. The resulting parameters of the S-Net are transferred to the Main-Net, which is used only for recognition. While the S-Net is learning, the system does not accept any new instances, analogous to sleep in biological systems.
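The three-network pipeline described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the class and function names (FNet, SNet, sleep_phase), the 1-nearest-neighbor memory, the tanh hidden layer, and all hyperparameters are assumptions chosen for clarity.

```python
import numpy as np

class FNet:
    """Fast learner: memorizes instances and answers by 1-nearest neighbor."""
    def __init__(self):
        self.X, self.y = [], []

    def learn(self, x, y):
        # Learning a new instance is instantaneous: just store it.
        self.X.append(np.asarray(x, float))
        self.y.append(float(y))

    def predict(self, x):
        d = [np.linalg.norm(np.asarray(x, float) - xi) for xi in self.X]
        return self.y[int(np.argmin(d))]

class SNet:
    """Slow learner: small one-hidden-layer network trained by gradient
    descent on the F-Net's outputs (hidden-unit count kept small)."""
    def __init__(self, n_in, n_hidden, lr=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.5, (n_hidden, n_in))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0.0, 0.5, n_hidden)
        self.b2 = 0.0
        self.lr = lr

    def forward(self, x):
        h = np.tanh(self.W1 @ x + self.b1)
        return h, self.W2 @ h + self.b2

    def train_step(self, x, target):
        # One gradient-descent step on the squared error (out - target)^2 / 2.
        h, out = self.forward(x)
        err = out - target
        gh = err * self.W2 * (1.0 - h ** 2)   # backprop through tanh
        self.W1 -= self.lr * np.outer(gh, x)
        self.b1 -= self.lr * gh
        self.W2 -= self.lr * err * h
        self.b2 -= self.lr * err

def sleep_phase(fnet, snet, epochs=2000):
    """Distill the F-Net into the S-Net; no new instances are accepted
    during this phase, analogous to sleep."""
    for _ in range(epochs):
        for x in fnet.X:
            snet.train_step(x, fnet.predict(x))
    return snet  # parameters would then be copied to the Main-Net

# Usage: learn XOR instances instantly, then distill during "sleep".
f = FNet()
for x, y in [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]:
    f.learn(np.array(x, float), y)
main_net = sleep_phase(f, SNet(n_in=2, n_hidden=4))  # recognition-only copy
```

The division of labor mirrors the abstract: the F-Net trades memory for instant learning, the S-Net slowly compresses that memory into a few hidden units, and the Main-Net serves recognition requests without being disturbed by ongoing training.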