The ability to automatically interpret images has long been one of the major challenges for modern computer vision. It is an important problem, particularly given the vast amount of image data available globally and our need to analyse this information for content [1, 2]. This paper shows how a neural network may be trained to recognise objects in outdoor scenes. The data used are extracted from the Bristol Image Database, a large set of high-quality colour images of outdoor scenes with a known ground-truth labelling. The technique is to segment the images, extract features for each region and then train an artificial neural network to act as a Bayesian classifier. For a set of unseen test images, and with knowledge of the ground-truth labelling, we may quantify the performance of the classifier.
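As a rough illustration of the pipeline described above (segment, extract per-region features, train a neural network classifier, score against the ground truth), the sketch below trains a small multi-layer perceptron on region feature vectors and evaluates it on held-out regions. The feature dimensionality, the class set, the network size and the use of scikit-learn's MLPClassifier are all assumptions made here for illustration; they are not details taken from the paper.

# Hypothetical sketch: per-region features -> neural network acting as a
# Bayesian (posterior-probability) classifier. Feature dimension, class
# labels and network size are illustrative assumptions, not the paper's.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score, confusion_matrix
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in for features extracted from segmented image regions
# (e.g. colour and texture statistics per region).
n_regions, n_features = 1000, 12
X = rng.normal(size=(n_regions, n_features))
# Stand-in ground-truth class label for each region (e.g. sky, vegetation, road).
y = rng.integers(0, 4, size=n_regions)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# A softmax-output MLP trained on 1-of-N class targets approximates the
# posterior class probabilities, i.e. it behaves as a Bayesian classifier.
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
clf.fit(X_train, y_train)

# Quantify performance on unseen regions against the ground-truth labelling.
y_pred = clf.predict(X_test)
print("accuracy:", accuracy_score(y_test, y_pred))
print(confusion_matrix(y_test, y_pred))
# Per-region posterior estimates are available via predict_proba.
posteriors = clf.predict_proba(X_test)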
[1] Teuvo Kohonen et al., "Improved versions of learning vector quantization," 1990 IJCNN International Joint Conference on Neural Networks, 1990.
[2] Neill W. Campbell et al., "Automatic Interpretation of Outdoor Scenes," BMVC, 1995.
[3] Alex Pentland et al., "Photobook: tools for content-based manipulation of image databases," 1994.
[4] Dragutin Petkovic et al., "Query by Image and Video Content: The QBIC System," Computer, 1995.
[5] B. T. Thomas et al., "Neural networks for the segmentation of outdoor images," 1996.