Self-organizing neural network that discovers surfaces in random-dot stereograms
The standard form of back-propagation learning [1] is implausible as a model of perceptual learning because it requires an external teacher to specify the desired output of the network. We show how the external teacher can be replaced by internally derived teaching signals. These signals are generated by using the assumption that different parts of the perceptual input have common causes in the external world. Small modules that look at separate but related parts of the perceptual input discover these common causes by striving to produce outputs that agree with each other (Fig. 1a). The modules may look at different modalities (such as vision and touch), or the same modality at different times (for example, the consecutive two-dimensional views of a rotating three-dimensional object), or even spatially adjacent parts of the same image. Our simulations show that when our learning procedure is applied to adjacent patches of two-dimensional images, it allows a neural network that has no prior knowledge of the third dimension to discover depth in random-dot stereograms of curved surfaces.
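One common reading of "striving to produce outputs that agree with each other" in this line of work is maximizing the mutual information between the outputs of two modules that view neighbouring patches sharing a common cause; under a Gaussian signal-plus-noise assumption this reduces to 0.5 * log(Var(a+b) / Var(a-b)). The sketch below is a minimal illustration under that assumption: the toy stereogram generator, the single-unit linear modules, the patch sizes, and all function names are illustrative choices, not the authors' exact setup.

```python
# Minimal sketch: two linear modules see adjacent patch pairs of a toy 1-D
# random-dot stereogram that share one disparity (the common cause), and are
# trained by gradient ascent on the Gaussian agreement objective
#   I ~= 0.5 * log( Var(a + b) / Var(a - b) ).
# All details here are illustrative assumptions, not the published procedure.
import numpy as np

rng = np.random.default_rng(0)

def make_stereo_patches(n_cases, patch=8, max_shift=2):
    """Toy 1-D random-dot stereograms: adjacent patch pairs share one disparity."""
    Xa, Xb = [], []
    for _ in range(n_cases):
        shift = rng.integers(0, max_shift + 1)            # common cause: disparity
        strip = rng.integers(0, 2, size=2 * patch + max_shift).astype(float)
        left = strip[:2 * patch]
        right = strip[shift:2 * patch + shift]            # right eye = shifted left eye
        # module A sees the first patch pair, module B the adjacent one
        Xa.append(np.concatenate([left[:patch], right[:patch]]))
        Xb.append(np.concatenate([left[patch:], right[patch:]]))
    return np.array(Xa), np.array(Xb)

def train(Xa, Xb, epochs=2000, lr=0.05):
    """Gradient ascent on 0.5*log(Var(a+b)/Var(a-b)) for two linear modules."""
    n, d = Xa.shape
    wa = rng.normal(scale=0.1, size=d)
    wb = rng.normal(scale=0.1, size=d)
    Xa = Xa - Xa.mean(0)                                  # centre the inputs
    Xb = Xb - Xb.mean(0)
    for _ in range(epochs):
        a, b = Xa @ wa, Xb @ wb
        s, diff = a + b, a - b
        vs, vd = s.var() + 1e-8, diff.var() + 1e-8
        # analytic gradients of the agreement objective w.r.t. the weight vectors
        ga = (Xa.T @ s) / (n * vs) - (Xa.T @ diff) / (n * vd)
        gb = (Xb.T @ s) / (n * vs) + (Xb.T @ diff) / (n * vd)
        wa += lr * ga
        wb += lr * gb
        wa /= np.linalg.norm(wa)                          # keep weights bounded
        wb /= np.linalg.norm(wb)
    a, b = Xa @ wa, Xb @ wb
    return wa, wb, 0.5 * np.log((a + b).var() / ((a - b).var() + 1e-8))

Xa, Xb = make_stereo_patches(2000)
wa, wb, agreement = train(Xa, Xb)
print(f"final agreement objective: {agreement:.3f}")
```

In this toy version the only quantity shared by the two patch pairs is the disparity, so any agreement the two modules achieve above chance reflects information about that common cause; no external teacher supplies a target output.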
[1] Geoffrey E. Hinton, et al. Learning representations by back-propagating errors. Nature, 1986.
[2] R. Tibshirani, et al. Generalized additive models for medical research. Statistical Methods in Medical Research, 1986.
[3] S. Lehky, et al. Neural model of stereoacuity and depth interpolation based on a distributed representation of stereo disparity. The Journal of Neuroscience, 1990.