An architecture for very large neural networks with high connectivity
The phenomenal interest over the last few years in modelling recognition and cognitive processes within a neural network or connectionist framework has resulted in numerous attempts to realise such systems using optical and VLSI technologies. A neural network is an interconnected structure of many simple nonlinear processing elements which learn from examples to form an internal representation of a problem. Computation is performed collectively by these processing elements, so activity is distributed throughout the network; inherent in this brief description of network operation is a high degree of parallelism. In classical pattern recognition terms, the feature metrics which make individual object classes similar result in the formation of clusters in n-dimensional pattern space. For multilayer perceptrons (MLPs), the most widely exploited network topology, these clusters are isolated by surrounding each cluster with decision hyperplanes. The MLP is trained on supplied examples of each pattern class, and the decision regions are positioned by some form of gradient descent algorithm which iteratively adapts the synaptic weights of each neuron. MLPs are thus an example of supervised learning in a feedforward network. The authors concentrate on a purely digital realisation of neural networks based on unsupervised learning, which is a form of adaptation to an unknown environment.
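The supervised gradient-descent adaptation described above can be sketched for a single sigmoid neuron learning a linearly separable class boundary (the logical AND). This is a minimal illustrative example, not the authors' digital realisation; the function names and learning-rate/epoch values are assumptions chosen for the sketch.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_neuron(samples, epochs=5000, lr=0.5, seed=0):
    """Iteratively adapt synaptic weights by gradient descent on the
    squared error, as in supervised MLP training (single-neuron case)."""
    rng = random.Random(seed)
    w = [rng.uniform(-0.5, 0.5) for _ in range(2)]
    b = rng.uniform(-0.5, 0.5)
    for _ in range(epochs):
        for x, t in samples:
            y = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
            # gradient of 0.5*(t - y)^2 with respect to the pre-activation
            delta = (y - t) * y * (1.0 - y)
            w[0] -= lr * delta * x[0]
            w[1] -= lr * delta * x[1]
            b -= lr * delta
    return w, b

# AND function: a single decision hyperplane (a line in 2-D pattern
# space) separates the two pattern classes
samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_neuron(samples)
preds = [round(sigmoid(w[0] * x[0] + w[1] * x[1] + b)) for x, _ in samples]
```

After training, the neuron's weights define the separating hyperplane; a full MLP composes many such neurons in layers so that several hyperplanes can enclose each cluster.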