Multi-layer perceptron ensembles for increased performance and fault-tolerance in pattern recognition tasks

Multilayer perceptrons (MLPs) have proven to be an effective way to solve classification tasks. A major concern in their use is the difficulty of defining the proper network for a specific application, owing to their sensitivity to initial conditions and to the overfitting and underfitting problems that limit their generalization capability. Moreover, time and hardware constraints may seriously reduce the degrees of freedom in the search for a single optimal network. A very promising way to partially overcome such drawbacks is the use of MLP ensembles: averaging and voting techniques are widely used in classical statistical pattern recognition and can be fruitfully applied to MLP classifiers. This work summarizes our experience in this field. A real-world OCR task is used as a test case to compare the different models.
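The averaging and voting combination schemes mentioned above can be illustrated with a minimal sketch. The snippet below is not taken from the paper; the network outputs are made-up stand-ins for the class-posterior estimates that trained MLPs would produce on a single input pattern, and it shows how averaging (soft voting) and majority (hard) voting each yield an ensemble decision.

```python
import numpy as np

# Hypothetical class-posterior estimates from three MLPs on a 4-class
# problem, one row per network, for a single input pattern.
net_outputs = np.array([
    [0.60, 0.20, 0.10, 0.10],  # network 1
    [0.30, 0.40, 0.20, 0.10],  # network 2
    [0.55, 0.15, 0.20, 0.10],  # network 3
])

# Averaging: take the mean of the posterior estimates across networks,
# then pick the class with the highest averaged score.
avg_scores = net_outputs.mean(axis=0)
avg_decision = int(np.argmax(avg_scores))

# Majority voting: each network casts one vote for its top class;
# the ensemble decision is the most frequent vote.
votes = np.argmax(net_outputs, axis=1)
vote_decision = int(np.bincount(votes).argmax())

print(avg_decision, vote_decision)  # → 0 0 (both schemes pick class 0)
```

In this example the two schemes agree, but they need not: averaging weighs the networks' confidence, while voting discards it, which is one reason such ensembles can differ in accuracy and fault-tolerance.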