A learning rule in the Chebyshev norm for multilayer perceptrons
An L∞ version of the back-propagation paradigm is proposed. A comparison between the L2 and the L∞ paradigms is presented, taking into account computational cost and speed of convergence. It is shown how the learning process can be formulated as an optimization problem. Experimental results on the convergence of the L∞ algorithm are presented for two test cases.
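The abstract does not give the update rule itself, so the following is only a minimal sketch of the general idea: training a one-hidden-layer perceptron by subgradient descent on the Chebyshev (L∞) error, i.e. the maximum absolute output error over the training set, rather than on the usual L2 (sum-of-squares) error. The network size, learning rate, toy data, and the use of a plain subgradient step are all illustrative assumptions, not the paper's actual algorithm.

import numpy as np

# Sketch only: one-hidden-layer MLP trained on the Chebyshev (L-infinity)
# error, max_i |out_i - y_i|, by subgradient descent. Hyperparameters and
# data are assumed for illustration and are not taken from the paper.
rng = np.random.default_rng(0)

# Toy regression data (assumed).
X = rng.uniform(-1.0, 1.0, size=(64, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]

n_hidden = 8
W1 = rng.normal(scale=0.5, size=(2, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.5, size=(n_hidden, 1))
b2 = np.zeros(1)

lr = 0.05
for epoch in range(2000):
    # Forward pass over the whole training set.
    h = np.tanh(X @ W1 + b1)            # hidden activations, (64, n_hidden)
    out = (h @ W2 + b2).ravel()         # network outputs, (64,)
    err = out - y                       # signed errors

    # Chebyshev norm of the error vector: max_i |err_i|.
    worst = int(np.argmax(np.abs(err)))
    loss = np.abs(err[worst])

    # A subgradient of max_i |err_i| involves only the worst-case sample,
    # so back-propagate sign(err) through that single example.
    g_out = np.sign(err[worst])
    x_w, h_w = X[worst], h[worst]

    dW2 = np.outer(h_w, g_out)
    db2 = np.array([g_out])
    g_h = (W2.ravel() * g_out) * (1.0 - h_w ** 2)   # through tanh
    dW1 = np.outer(x_w, g_h)
    db1 = g_h

    # Subgradient step.
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

print(f"final max |error|: {loss:.4f}")

The contrast with the L2 paradigm is visible in the backward pass: a sum-of-squares loss spreads the gradient over every training sample, whereas the maximum-error criterion updates the weights using only the sample currently achieving the worst error, which is what makes the convergence behaviour and per-step cost of the two paradigms worth comparing.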