Influence of Introducing an Additional Hidden Layer on the Character Recognition Capability of a BP Neural Network with One Hidden Layer

The objective of this paper is to study the character recognition capability of a feed-forward neural network trained with the back-propagation algorithm when more than one hidden layer is used. The analysis was conducted on 182 different letters from the English alphabet. After binarization, these characters were combined to form the training patterns for the neural network. The network learned the desired behaviour by adjusting its connection strengths at every iteration. Conjugate gradient descent was applied to each presented training pattern to locate a minimum on the error surface. Experiments were performed with one and two hidden layers, and the results revealed that as the number of hidden layers is increased, a lower final mean squared error is achieved, although over a larger number of epochs, and the recognition performance of the neural network becomes more accurate.
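
To illustrate the experimental comparison, the following is a minimal sketch of training the same binarized character patterns with one versus two hidden layers and reporting the final mean squared error. It is not the paper's implementation: it assumes 7x5 binarized character grids, 26 output classes, sigmoid units, and plain batch gradient descent in place of the conjugate-gradient training described above; the layer sizes, learning rate, epoch count, and randomly generated placeholder data are illustrative assumptions only.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def init_layers(sizes):
    # Small random weights and zero biases for each layer transition.
    return [(rng.normal(0.0, 0.1, (n_in, n_out)), np.zeros(n_out))
            for n_in, n_out in zip(sizes[:-1], sizes[1:])]

def forward(layers, x):
    # Return the activations of every layer, input included.
    acts = [x]
    for W, b in layers:
        acts.append(sigmoid(acts[-1] @ W + b))
    return acts

def train(layers, X, T, lr=0.5, epochs=2000):
    # Batch gradient descent on mean squared error (stand-in for
    # the conjugate-gradient training used in the paper).
    for _ in range(epochs):
        acts = forward(layers, X)
        # Output-layer error term for MSE loss with sigmoid units.
        delta = (acts[-1] - T) * acts[-1] * (1.0 - acts[-1])
        for i in range(len(layers) - 1, -1, -1):
            W, b = layers[i]
            grad_W = acts[i].T @ delta / len(X)
            grad_b = delta.mean(axis=0)
            if i > 0:
                # Propagate the error term to the previous layer
                # using the pre-update weights.
                delta = (delta @ W.T) * acts[i] * (1.0 - acts[i])
            layers[i] = (W - lr * grad_W, b - lr * grad_b)
    return layers

def mse(layers, X, T):
    return float(np.mean((forward(layers, X)[-1] - T) ** 2))

# Placeholder data: 182 binarized 7x5 character patterns (35 inputs)
# with one-hot targets over 26 letter classes (assumed encoding).
X = rng.integers(0, 2, (182, 35)).astype(float)
T = np.eye(26)[rng.integers(0, 26, 182)]

one_hidden = train(init_layers([35, 40, 26]), X, T)
two_hidden = train(init_layers([35, 40, 40, 26]), X, T)
print("final MSE, one hidden layer: ", mse(one_hidden, X, T))
print("final MSE, two hidden layers:", mse(two_hidden, X, T))

Running both configurations on the same patterns makes the trade-off reported in the results concrete: the deeper network can reach a lower final mean squared error, but typically needs more epochs to get there.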