This paper introduces and explores representational biases for efficient learning of spatial, temporal, or spatio-temporal patterns in connectionist networks (CN): massively parallel networks of simple computing elements. It examines learning mechanisms that constructively build up network structures encoding information from environmental stimuli at successively higher resolutions, as needed for the tasks (e.g., perceptual recognition) that the network has to perform. Some simple examples are presented to illustrate the basic structures and processes used in such networks to ensure the parsimony of learned representations, by guiding the system to focus its efforts at the minimal adequate resolution. Several extensions of the basic algorithm for efficient learning using multi-resolution representations of spatial, temporal, or spatio-temporal patterns are discussed.

1. Multi-Resolution Iconic Representations

Environmental stimuli (e.g., 2-dimensional visual images) typically contain features over multiple scales. Multi-resolution pattern encodings provide a basis for analyzing features in the environmental stimuli at different scales (Uhr, 1972; Rosenfeld, 1984; Dyer, 1987). This section introduces multi-resolution representations and their use in efficient learning of spatial, temporal, and spatio-temporal patterns.

Typically, a multi-resolution encoding scheme transforms the input (e.g., a 2-dimensional image) into a set of maps at successively coarser resolutions, each making explicit image features at a specific scale. An example of such a scheme is the Gaussian pyramid, in which successively higher levels encode blurred and sub-sampled versions of the immediately lower level (where the blurring may be applied to any local property of the stimulus, e.g., intensity, color, texture, etc.). If the stimulus is a 512x512 image, blurring with a 2-dimensional Gaussian kernel g(x, y) = (1/(2πγ²)) exp(−(x² + y²)/(2γ²))
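The blur-then-subsample construction described above can be sketched in a few lines. The following is a minimal illustrative implementation, not taken from the paper: it assumes 2x sub-sampling per level and a separable Gaussian blur with a fixed width, and all function names are invented for this sketch.

```python
import numpy as np

def gaussian_kernel_1d(sigma, radius):
    """Normalized 1-D Gaussian kernel sampled on [-radius, radius]."""
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def blur(image, sigma=1.0):
    """Separable 2-D Gaussian blur: filter along rows, then along columns."""
    k = gaussian_kernel_1d(sigma, radius=int(3 * sigma))
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, image)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, rows)

def gaussian_pyramid(image, levels):
    """Build a pyramid: each level is a blurred, 2x-subsampled copy of the one below."""
    pyramid = [image]
    for _ in range(levels - 1):
        image = blur(image)[::2, ::2]  # blur, then keep every other row and column
        pyramid.append(image)
    return pyramid
```

For a 512x512 input, `gaussian_pyramid(img, 4)` would yield maps of size 512x512, 256x256, 128x128, and 64x64, each making explicit structure at a coarser scale than the level below it.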
References

[1] Azriel Rosenfeld, et al. Multiresolution Image Processing and Analysis, 1984.
[2] Leonard Uhr, et al. Layered "Recognition Cone" Networks That Preprocess, Classify, and Describe, IEEE Transactions on Computers, 1972.
[3] Vasant Honavar, et al. Generative learning structures and processes for generalized connectionist networks, Information Sciences, 1993.
[4] S. Tanimoto, et al. Structured Computer Vision: Machine Perception Through Hierarchical Computation Structures, 1980.
[5] Vasant Honavar. Learning Parsimonious Representations of Three-Dimensional Shapes, 1992.
[6] Christian Lebiere, et al. The Cascade-Correlation Learning Architecture, NIPS, 1989.
[7] Stephen I. Gallant, et al. Perceptron-based learning algorithms, IEEE Transactions on Neural Networks, 1990.
[8] Vasant Honavar, et al. Brain-structured Connectionist Networks that Perceive and Learn, 1989.
[9] Vasant G. Honavar. Perceptual Development and Learning: From Behavioral, Neurophysiological, and Morphological Evidence to Computational Models, 1989.
[10] Stephen Grossberg, et al. A massively parallel architecture for a self-organizing neural pattern recognition machine, Computer Vision, Graphics, and Image Processing, 1988.
[11] Charles R. Dyer, et al. Multiscale image understanding, 1987.
[12] Allen R. Hanson, et al. Processing Cones: A Computational Structure for Image Analysis, 1981.