Multilayer Neural Networks and Backpropagation

A computationally efficient method for training multilayer perceptrons is the backpropagation algorithm, which is regarded as a landmark in the development of neural networks. This chapter presents two learning methods, batch learning and online learning, distinguished by how the supervised training of the multilayer perceptron is actually performed. The essence of backpropagation learning is to encode an input-output mapping into the synaptic weights and thresholds of a multilayer perceptron; the hope is that a well-trained network learns enough about the past to generalize to the future. The chapter concludes with cross-validation and generalization. Cross-validation is particularly appealing when designing a large neural network with good generalization as the goal. Generalization assumes that the test data are drawn from the same population used to generate the training data.
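To make the ideas above concrete, here is a minimal sketch of batch-mode backpropagation for a one-hidden-layer perceptron, trained on the XOR problem. The layer sizes, logistic sigmoid activation, learning rate, and epoch count are illustrative assumptions, not prescriptions from the chapter; an online (per-example) variant would update the weights after each training pattern instead of averaging over the whole batch.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)

# XOR training set: inputs and target outputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Synaptic weights and thresholds (biases); sizes are illustrative.
W1 = rng.normal(size=(2, 4))   # input -> hidden weights
b1 = np.zeros((1, 4))          # hidden thresholds
W2 = rng.normal(size=(4, 1))   # hidden -> output weights
b2 = np.zeros((1, 1))

lr = 1.0                       # assumed learning rate
losses = []
for epoch in range(2000):
    # Forward pass: compute activations layer by layer.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    losses.append(float(np.mean((out - y) ** 2)))

    # Backward pass: propagate the error signal from the output
    # layer back to the hidden layer (the "backpropagation" step).
    d_out = (out - y) * out * (1 - out)   # local gradient at output
    d_h = (d_out @ W2.T) * h * (1 - h)    # local gradient at hidden layer

    # Batch update: average the gradients over the whole training set.
    W2 -= lr * h.T @ d_out / len(X)
    b2 -= lr * d_out.mean(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h / len(X)
    b1 -= lr * d_h.mean(axis=0, keepdims=True)

print(losses[0], "->", losses[-1])  # mean squared error should decrease
```

The input-output mapping ends up encoded entirely in `W1`, `b1`, `W2`, and `b2`, as the chapter describes; in batch learning, one weight update uses the gradient averaged over all training examples per epoch.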