Improved image classification with neural networks by fusing multispectral signatures with topographical data

Abstract Automated schemes are needed to classify multispectral remotely sensed data. Correctly interpreting images from satellites and aircraft often requires human intelligence; humans succeed because they combine several types of cues about a scene to identify its contents accurately. It follows that computer techniques that integrate different types of information should perform better than single-source approaches. This research demonstrates that multispectral signatures and topographical information can be used in concert and, significantly, that this dual-source approach classifies a remotely sensed image better than multispectral classification alone. The classifications were accomplished by fusing spectral signatures with topographical information using neural network technology. A neural network was trained to classify Landsat multispectral images of the Black Hills. Bands 4, 5, 6, and 7 were used to generate four-class classifications based on the spectral signatures, with a file of georeferenced ground-truth classifications serving as the training criterion. The network was trained to classify urban, agricultural, range, and forest pixels with 65.7% accuracy. A second neural network was then trained to fuse these multispectral signature results with a file of georeferenced altitude data containing 10 elevation levels. When this nonspectral elevation information was fused with the spectral signatures, classification accuracy improved to 73.7% and 75.7%.
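To make the fusion idea concrete, the following is a minimal sketch (not the authors' code) of a small feed-forward network that takes the four Landsat MSS band values together with a quantized elevation level as a single fused input vector and predicts one of the four land-cover classes. The layer sizes, learning rate, and synthetic training data are illustrative assumptions, not parameters reported in the paper.

```python
# Minimal sketch: fusing 4 spectral bands + 1 elevation level in one network.
# All hyperparameters and the synthetic data below are assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training set: n pixels, each with 4 spectral bands (bands 4-7)
# scaled to [0, 1] and one elevation level quantized into 10 steps.
n = 500
spectral = rng.random((n, 4))                     # bands 4, 5, 6, 7
elevation = rng.integers(0, 10, size=(n, 1)) / 9  # 10 elevation levels, scaled
x = np.hstack([spectral, elevation])              # fused 5-feature input
labels = rng.integers(0, 4, size=n)               # 0=urban, 1=ag, 2=range, 3=forest
y = np.eye(4)[labels]                             # one-hot ground truth

# One hidden layer of 16 units (assumed size), softmax output over 4 classes.
w1 = rng.normal(0, 0.1, (5, 16)); b1 = np.zeros(16)
w2 = rng.normal(0, 0.1, (16, 4)); b2 = np.zeros(4)

def forward(x):
    h = np.tanh(x @ w1 + b1)                      # hidden activations
    logits = h @ w2 + b2
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return h, e / e.sum(axis=1, keepdims=True)    # softmax class probabilities

lr = 0.5
for epoch in range(200):                          # plain batch gradient descent
    h, p = forward(x)
    grad_logits = (p - y) / n                     # cross-entropy gradient
    grad_w2 = h.T @ grad_logits
    grad_h = grad_logits @ w2.T * (1 - h ** 2)    # backprop through tanh
    grad_w1 = x.T @ grad_h
    w2 -= lr * grad_w2; b2 -= lr * grad_logits.sum(axis=0)
    w1 -= lr * grad_w1; b1 -= lr * grad_h.sum(axis=0)

_, p = forward(x)
accuracy = (p.argmax(axis=1) == labels).mean()
print(f"training accuracy on synthetic data: {accuracy:.1%}")
```

The same pattern applies whether the elevation feature is appended to the spectral input of a single network, as above, or fed to a second network that fuses the first network's spectral classification with the topographical file, as described in the abstract; the reported accuracies (65.7% spectral-only, 73.7% and 75.7% with fusion) come from the paper's own experiments, not from this sketch.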