Analysis of the Internal Representations in Neural Networks for Machine Intelligence
We examined the internal representations that multi-layer perceptrons form of their training patterns and demonstrated that the connection weights between layers transform the representation of the information from one layer to the next in a meaningful way. The internal code, which may be analog or binary in form, is found to depend on several factors: the choice of an appropriate representation of the training patterns, the similarities between the patterns, and the network structure, i.e. the number of hidden layers and the number of hidden units in each layer.
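As a minimal sketch of the idea (not the paper's actual experiment), the hidden-layer activations of a one-hidden-layer perceptron can be read out as the "internal code" of an input pattern, either in analog form or thresholded to a binary form. The layer sizes, random weights, and helper names below are illustrative assumptions; in the paper the weights would be learned by back-propagation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

n_in, n_hidden = 8, 3                     # hypothetical layer sizes
W1 = rng.normal(size=(n_hidden, n_in))    # input-to-hidden weights
b1 = rng.normal(size=n_hidden)            # hidden biases

def internal_code(pattern, binarize=False):
    """Hidden-layer activations for one pattern; thresholding at 0.5
    turns the analog code into a binary code."""
    h = sigmoid(W1 @ pattern + b1)
    return (h > 0.5).astype(int) if binarize else h

# A binary training pattern, as in the paper's pattern-recognition setting.
p = rng.integers(0, 2, size=n_in).astype(float)
analog = internal_code(p)                  # analog internal code
binary = internal_code(p, binarize=True)   # binary internal code
print(analog, binary)
```

Comparing such codes across patterns shows how pattern similarity and the number of hidden units shape the representation the network settles on.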