The influence of relative sample size in training artificial neural networks

Abstract This Letter explores the impact of the relative size of the sample sets used to define candidate classes on the classification accuracy obtained using artificial neural network techniques. It is suggested that, to avoid classification bias, samples should be weighted appropriately to reflect the ‘complexity’ of each class. Thus, broadly defined classes with high intra-class variability, such as ‘built’, should be trained on larger samples than more narrowly defined classes, such as ‘soil’. The Letter also highlights the degree of variation between runs, a consequence of the network converging towards local rather than global error minima.
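As a minimal illustrative sketch (not the Letter's own procedure), one way to weight training samples by class ‘complexity’ is to allocate sample sizes in proportion to a simple measure of intra-class spectral variability, such as the mean per-band standard deviation. The function name, the variability measure, and the example data below are assumptions introduced purely for illustration.

```python
import numpy as np

def allocate_samples(class_features, total_samples):
    """Allocate training sample sizes in proportion to a simple
    measure of intra-class variability (mean per-band standard
    deviation), so broad classes receive larger samples.

    This is an illustrative heuristic, not the method of the Letter.
    """
    variability = {
        name: float(np.mean(np.std(feats, axis=0)))
        for name, feats in class_features.items()
    }
    total_var = sum(variability.values())
    return {
        name: max(1, round(total_samples * v / total_var))
        for name, v in variability.items()
    }

# Hypothetical example: 'built' pixels are more spectrally varied than
# 'soil' pixels, so they receive a larger share of the training sample.
rng = np.random.default_rng(0)
class_features = {
    "built": rng.normal(0.0, 3.0, size=(500, 4)),  # high spectral spread
    "soil":  rng.normal(0.0, 0.5, size=(500, 4)),  # narrow spectral spread
}
print(allocate_samples(class_features, total_samples=600))
```

Under this sketch, the broadly defined ‘built’ class receives most of the 600 training samples, consistent with the principle the abstract describes.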