Unlimited accuracy in layered networks
It is shown that precision requirements on input units may lead to prohibitive learning times when using standard neural learning algorithms, even in very simple cases. Two alternative approaches, which achieve fast learning at any accuracy, are proposed. The first is a simple preprocessing of the inputs. The second consists of variants of the perceptron rule and error backpropagation with variable learning parameters. Generalisation is only local in the former but global in the latter.
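The abstract does not specify the exact variants proposed, but the second approach can be illustrated with a minimal sketch: a perceptron rule whose learning rate is a variable parameter that decays over time, trained on inputs that differ only at high precision. The 1/t schedule, the data, and all names below are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

def perceptron_train(X, y, epochs=100, eta0=1.0):
    """Perceptron rule with a variable (decaying) learning rate.

    X : (n_samples, n_features) inputs
    y : (n_samples,) labels in {-1, +1}

    The 1/t decay is one plausible choice of "variable learning
    parameter"; the paper's variants may differ.
    """
    w = np.zeros(X.shape[1])
    b = 0.0
    t = 1
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            eta = eta0 / t                        # variable learning rate
            if yi * (np.dot(w, xi) + b) <= 0:     # misclassified example
                w += eta * yi * xi
                b += eta * yi
            t += 1
    return w, b

# Illustrative data: 1-D inputs clustered tightly around a threshold,
# so that separating them demands high input precision.
rng = np.random.default_rng(0)
X = rng.uniform(0.4999, 0.5001, size=(200, 1))
y = np.where(X[:, 0] > 0.5, 1, -1)

w, b = perceptron_train(X, y)
print("learned decision boundary at x =", -b / w[0])
```

With a fixed learning rate, the tiny margins of such data can require very many updates; varying the learning parameter during training is the kind of remedy the abstract alludes to.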