Determining Optimum Structure for Artificial Neural Networks

Artificial Neural Networks (ANNs) have attracted increasing attention from researchers in many fields, including economics, medicine and computer processing, and have been used to solve a wide range of problems. In remote sensing research, ANN classifiers have been applied to tasks such as land cover mapping, image compression, geological mapping, and meteorological image classification, and have generally proved more powerful than conventional statistical techniques, especially when the training data are not normally distributed. The use of ANNs requires some critical decisions on the part of the user, and these decisions may affect the accuracy of the resulting classification. This study investigates the determination of the optimum network structure, one of the most important attributes of a network. The structure of a network has a direct effect on training time and classification accuracy. Although the impact of network structure on performance has received some discussion in the literature, there is no established method or approach for determining the best structure. The relationship between network structure and classification accuracy is investigated here using a MATLAB tool-kit, taking advantage of its scientific visualisation capabilities. The effect of the composition of the training data on network structure is also investigated.
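The kind of empirical structure-vs-accuracy investigation described above can be sketched as a search over candidate hidden-layer sizes, scoring each trained network on held-out data. This is an illustrative assumption, not the study's actual procedure: the original work used a MATLAB tool-kit, whereas this sketch uses scikit-learn's `MLPClassifier` on synthetic data standing in for a remote-sensing training set.

```python
# Hypothetical sketch: search for the hidden-layer size that maximises
# held-out classification accuracy. The candidate sizes, the synthetic
# data set, and the use of scikit-learn are all illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic multi-class data standing in for a remote-sensing training set.
X, y = make_classification(n_samples=400, n_features=8, n_classes=3,
                           n_informative=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

results = {}
for n_hidden in (2, 4, 8, 16, 32):  # candidate network structures
    net = MLPClassifier(hidden_layer_sizes=(n_hidden,),
                        max_iter=2000, random_state=0)
    net.fit(X_train, y_train)
    results[n_hidden] = net.score(X_test, y_test)  # held-out accuracy

best = max(results, key=results.get)
print(f"best hidden-layer size: {best} (accuracy {results[best]:.2f})")
```

Larger hidden layers do not necessarily score better here: beyond some size the extra weights mainly add training time and overfitting risk, which mirrors the trade-off the abstract describes.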
