Delta Rule and Backpropagation

Assuming that the reader is already familiar with the general concept of Artificial Neural Networks and with the Perceptron learning rule, this paper introduces the Delta learning rule as a basis for the Backpropagation learning rule. After discussing why multi-layer Artificial Neural Networks are necessary for solving non-linearly separable problems, the paper describes all the mathematical steps that lead from the simple gradient descent formulation to the Backpropagation algorithm, which remains one of the most widely used methods for training feed-forward multi-layer Artificial Neural Networks. The paper concludes by discussing issues related to overfitting in feed-forward multi-layer Artificial Neural Networks and by presenting some heuristics and ideas for appropriate parameter setting.
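To fix ideas before the derivation, the following is a minimal sketch of the Delta rule as gradient descent on a squared-error loss for a single linear unit; the function and variable names are illustrative and do not come from the paper.

```python
# Minimal sketch of the Delta rule: stochastic gradient descent on the
# squared error of a single linear unit. Names are illustrative only.

def delta_rule_train(samples, targets, lr=0.1, epochs=200):
    """Learn weights w (with a trailing bias weight) so that w . x ~ target."""
    n = len(samples[0])
    w = [0.0] * (n + 1)  # last entry acts as the bias weight
    for _ in range(epochs):
        for x, t in zip(samples, targets):
            xb = list(x) + [1.0]                        # append bias input
            y = sum(wi * xi for wi, xi in zip(w, xb))   # linear output
            err = t - y
            # Delta rule update: w_i <- w_i + lr * (t - y) * x_i
            w = [wi + lr * err * xi for wi, xi in zip(w, xb)]
    return w

# The AND function is linearly separable, so a single unit suffices.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
T = [0, 0, 0, 1]
w = delta_rule_train(X, T)
preds = [1 if sum(wi * xi for wi, xi in zip(w, list(x) + [1.0])) > 0.5 else 0
         for x in X]
```

Because the XOR function is not linearly separable, no weight vector for a single unit can reproduce it; this is the limitation that motivates the multi-layer networks and the Backpropagation algorithm discussed in the paper.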