An accelerated learning algorithm (ABP - adaptive backpropagation) is proposed for the supervised training of multi-layer networks. The learning algorithm is based on the principle of "forced dynamics" for the error functional. This is accomplished by an appropriate choice of the network link weight update rule, guaranteeing convergence to a local minimum. The algorithm does not require any information from previous updates, and it uses exactly the same error gradient terms as standard backpropagation. Numerical simulation results indicate that there are certain advantages in using ABP. The method is consistently about an order of magnitude faster than standard backpropagation, and also faster than such accelerated algorithms as the well-known quickprop. Furthermore, there is no added "tuning" parameter other than the learning rate, to which ABP appears to be less sensitive. However, the drawbacks of "jumpy" behavior in the vicinity of local minima and the inability to eventually reach the global minimum remain, warranting further investigation.
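The abstract does not give the explicit update rule, so the following is only a minimal sketch of one common way to realize first-order forced dynamics for the error functional, assuming the target dynamics dE/dt = -lam * E. Under that assumption, choosing the weight step Delta_w = -lam * E * grad_E / ||grad_E||^2 gives dE/dt = grad_E . dw/dt = -lam * E, so the error decays roughly exponentially while using only the gradient terms already available in standard backpropagation. The function name abp_step and the quadratic test problem are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def abp_step(w, E, grad, lam=0.5, eps=1e-12):
    """One hypothetical forced-dynamics update (a sketch, not the paper's exact rule).

    Enforces dE/dt = -lam * E to first order, since the step satisfies
    grad . dw = -lam * E. It uses only the current error value and its
    gradient, the same terms used in standard backpropagation, and keeps
    no state from previous updates.
    """
    return w - lam * E * grad / (np.dot(grad, grad) + eps)

# Toy demonstration on an assumed quadratic error E(w) = 0.5 * ||w||^2.
w = np.array([3.0, -2.0])
for k in range(25):
    E = 0.5 * np.dot(w, w)   # error functional
    grad = w                 # its gradient
    w = abp_step(w, E, grad, lam=0.5)

# To first order, E shrinks by a factor of (1 - lam) per step,
# mimicking the forced exponential decay of the error.
print(0.5 * np.dot(w, w))
```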