A comparison of extreme learning machines and back-propagation trained feed-forward networks processing the MNIST database

This paper compares the classification performance and training times of single-hidden-layer feed-forward neural networks trained with two weight optimisation methods: the extreme learning machine (ELM) algorithm and the back-propagation (BP) algorithm. Using identical network topologies, the two methods were compared directly on the MNIST handwritten digit recognition database. Our results show that, while the ELM method was much faster to train for a given network topology, a much larger number of hidden units was required to match the performance of the BP algorithm. When the extra computation due to the larger number of hidden units was taken into account for the ELM network, the computation times of the two methods at a similar performance level were not substantially different.
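To illustrate why ELM training is fast for a fixed topology, the following minimal sketch shows the essential step: the input-to-hidden weights are drawn at random and never updated, and only the output weights are obtained in a single closed-form least-squares solve, whereas BP iteratively updates all weights. This is not the paper's implementation; the function names `train_elm` and `predict_elm`, the tanh activation, and the uniform weight range are illustrative assumptions.

```python
import numpy as np

def train_elm(X, T, n_hidden, seed=0):
    """Train a single-hidden-layer ELM (illustrative sketch).

    X: (n_samples, n_features) inputs.
    T: (n_samples, n_classes) one-hot targets.
    """
    rng = np.random.default_rng(seed)
    # Hidden-layer weights and biases are random and fixed (never trained).
    W = rng.uniform(-1.0, 1.0, size=(X.shape[1], n_hidden))
    b = rng.uniform(-1.0, 1.0, size=n_hidden)
    H = np.tanh(X @ W + b)  # hidden-layer activations
    # Only the output weights are fitted, via a least-squares
    # (pseudoinverse) solve -- the step that makes ELM training fast.
    beta, *_ = np.linalg.lstsq(H, T, rcond=None)
    return W, b, beta

def predict_elm(X, W, b, beta):
    """Return class scores; the predicted digit is the argmax per row."""
    return np.tanh(X @ W + b) @ beta

# For MNIST, X would hold flattened 28x28 images (784 features) and T
# one-hot digit labels (10 classes), e.g.:
#   W, b, beta = train_elm(X_train, T_train, n_hidden=800)
#   digits = predict_elm(X_test, W, b, beta).argmax(axis=1)
```

The sketch also makes the paper's trade-off concrete: because the hidden representation is random rather than learned, a larger `n_hidden` is typically needed to reach a given accuracy, which adds computation at both training and test time.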