A Real-Time Learning Algorithm for Two-Hidden-Layer Feedforward Networks

In some practical applications, the requirement on time complexity is more stringent than that on space complexity. However, current neural networks seem far from meeting the standard of real-time applications. In a previous paper, Huang [1] proved with a novel constructive method that two-hidden-layer feedforward networks (TLFNs) with 2√((m+2)N) (≪ N) hidden neurons can learn any N distinct samples (Xi, ti) with arbitrarily small error, where m is the required number of output neurons. On the theoretical basis of those previous results [1], this paper introduces an improved constructive method for TLFNs with real-time learning capability. The results shown in this paper prove that both the training and generalization errors of the new TLFN can reach arbitrarily small values if sufficiently many distinct training samples are provided. Additionally, this paper uses experimental results to compare the learning time with that of traditional gradient-descent-based learning methods such as the back-propagation (BP) algorithm. The learning algorithm for two-hidden-layer feedforward neural networks is able to learn any set of observations in just one short iteration (one instead of a large number of learning epochs) with acceptable learning and testing accuracy.
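As a rough illustration of why the bound 2√((m+2)N) implies compact networks, the expression can be evaluated directly. This is only a sketch; the function name and the rounding to an integer neuron count are my own conventions, not part of the cited construction.

```python
import math

def tlfn_hidden_neurons(n_samples: int, n_outputs: int) -> int:
    """Evaluate the hidden-neuron count 2*sqrt((m+2)*N) from [1].

    n_samples : N, the number of distinct training samples
    n_outputs : m, the required number of output neurons
    Rounding up to an integer is an assumption for illustration.
    """
    return math.ceil(2 * math.sqrt((n_outputs + 2) * n_samples))

# For N = 10000 samples and a single output (m = 1), the bound
# grows only with the square root of N, so it stays far below N.
print(tlfn_hidden_neurons(10_000, 1))  # 2*sqrt(3*10000) ≈ 346.4 -> 347
```

The √N growth is what makes the construction attractive for real-time use: doubling the training set increases the network size by only about 41%.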