In the unsupervised learning paradigm, a network of neuron-like units is presented with an ensemble of input patterns from a structured environment, such as the visual world, and learns to represent the regularities in that input. The major goal in developing unsupervised learning algorithms is to find objective functions that characterize the quality of the network's representation without explicitly specifying the desired outputs of any of the units. The objective functions considered here cause a unit to become tuned to spatially coherent features of visual images (such as texture, depth, shading, and surface orientation) by learning to predict the outputs of other units that have spatially adjacent receptive fields. Simulations show that, using an information-theoretic algorithm called IMAX, a network can be trained to represent depth by observing random-dot stereograms of surfaces with continuously varying disparities. Once a layer of depth-tuned units has developed, subsequent layers are trained to perform surface interpolation on curved surfaces by learning to predict the depth of one image region from depth measurements in surrounding regions. An extension of the basic model allows a population of competing units to learn a distributed code for disparity, which naturally gives rise to a representation of discontinuities.
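The core idea, mutual information maximization between units with adjacent receptive fields, can be illustrated with a minimal sketch. It assumes the Gaussian variant of the IMAX objective, I(a; b) ≈ ½ log [Var(a + b) / Var(a − b)], applied to two linear units whose "adjacent" input patches share a common latent feature (a stand-in for spatially coherent depth); the data, unit count, and finite-difference optimizer are illustrative choices, not the paper's actual training setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two "adjacent" input patches sharing a common latent feature
# (a stand-in for coherent depth), each corrupted by independent noise.
n, d = 2000, 8
latent = rng.normal(size=(n, 1))
patch_a = latent + 0.3 * rng.normal(size=(n, d))
patch_b = latent + 0.3 * rng.normal(size=(n, d))

# Two linear units, one per patch, with small random initial weights.
w_a = rng.normal(scale=0.1, size=d)
w_b = rng.normal(scale=0.1, size=d)

def imax_objective(w_a, w_b):
    # Gaussian approximation to the mutual information between the
    # two unit outputs: I(a; b) ~ 0.5 * log(Var(a + b) / Var(a - b)).
    a, b = patch_a @ w_a, patch_b @ w_b
    return 0.5 * np.log(np.var(a + b) / np.var(a - b))

def grad(f, w, eps=1e-5):
    # Finite-difference gradient, chosen for clarity rather than speed.
    g = np.zeros_like(w)
    for i in range(len(w)):
        e = np.zeros_like(w)
        e[i] = eps
        g[i] = (f(w + e) - f(w - e)) / (2 * eps)
    return g

# Gradient ascent on the shared objective, alternating between units.
for _ in range(300):
    w_a += 0.05 * grad(lambda w: imax_objective(w, w_b), w_a)
    w_b += 0.05 * grad(lambda w: imax_objective(w_a, w), w_b)

a, b = patch_a @ w_a, patch_b @ w_b
corr = np.corrcoef(a, b)[0, 1]
print(f"correlation between unit outputs: {corr:.3f}")
```

Because the only signal the two patches share is the latent feature, maximizing the objective drives both units to extract it, so their outputs become strongly correlated; the same principle, applied to stereo image patches, yields units tuned to disparity.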