In some practical applications, the requirements on time complexity are more stringent than those on space complexity. However, current neural networks seem far from meeting the standard of real-time applications. In a previous paper, Huang [5] proved by a novel constructive method that two-hidden-layer feedforward networks (TLFNs) with 2√((m+2)N) (≪ N) hidden neurons can learn any N distinct samples (x_i, t_i) with arbitrarily small error, where m is the required number of output neurons. On the theoretical basis of those results [5], this paper introduces an improved constructive method for TLFNs with real-time learning capability. The results in this paper prove that both the training and generalization errors of the new TLFN can reach arbitrarily small values if sufficiently many distinct training samples are provided. Additionally, experimental results are used to compare the learning time with that of traditional gradient-descent-based learning methods such as the back-propagation (BP) algorithm. The proposed learning algorithm for two-hidden-layer feedforward networks is able to learn any set of observations in just one short iteration (one instead of a large number of learning epochs) with acceptable learning and testing accuracy.
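To illustrate the scale of the neuron-count bound above, the following is a minimal sketch (not the paper's construction itself) that evaluates 2√((m+2)N), the number of hidden neurons required by Huang's TLFN result [5], for a given number of samples N and output neurons m; the concrete values of N and m are assumptions chosen only for illustration.

```python
import math

def tlfn_hidden_neurons(n_samples: int, n_outputs: int) -> int:
    """Hidden-neuron count 2*sqrt((m+2)*N) from Huang's TLFN bound [5]."""
    return math.ceil(2 * math.sqrt((n_outputs + 2) * n_samples))

# Assumed example: N = 10,000 training samples, m = 10 output neurons.
N, m = 10_000, 10
h = tlfn_hidden_neurons(N, m)
print(f"{h} hidden neurons suffice for {N} samples")  # 693, i.e. << N
```

Note how the bound grows only with the square root of N, which is why the required hidden-layer size stays far smaller than the number of training samples.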
[1] Geoffrey E. Hinton, et al., "Learning internal representations by error propagation," 1986.
[2] Alessandro Sperduti, et al., "Speed up learning and network optimization with extended back propagation," Neural Networks, 1993.
[3] Shixin Cheng, et al., "Dynamic learning rate optimization of the backpropagation algorithm," IEEE Trans. Neural Networks, 1995.
[4] Guang-Bin Huang, et al., "Upper bounds on the number of hidden neurons in feedforward networks with arbitrary bounded nonlinear activation functions," IEEE Trans. Neural Networks, 1998.
[5] Guang-Bin Huang, et al., "Learning capability and storage capacity of two-hidden-layer feedforward networks," IEEE Trans. Neural Networks, 2003.
[6] Mohamed Najim, et al., "A fast feedforward training algorithm using a modified form of the standard backpropagation algorithm," IEEE Trans. Neural Networks, 2001.
[7] Amir F. Atiya, et al., "An accelerated learning algorithm for multilayer perceptron networks," IEEE Trans. Neural Networks, 1994.