Parallel neural network learning through repetitive bounded depth trajectory branching
The neural network learning process is a sequence of network updates and can be represented by a sequence of points in the weight space, which we call a 'learning trajectory'. In this paper, a new learning approach based on repetitive bounded depth trajectory branching is proposed. Its objectives are to improve generalization and to speed up convergence by avoiding local minima when selecting an alternative trajectory. The experimental results show improved generalization compared to the standard backpropagation learning algorithm. The proposed parallel implementation dramatically improves the algorithm's efficiency, to the point that computing time is no longer a critical factor in achieving improved generalization.
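The idea of repetitive bounded depth trajectory branching can be sketched as follows: train along the current trajectory for a while, then spawn several perturbed copies of the weights (branches), train each for a bounded number of steps, and continue from the branch with the lowest error. The sketch below is a minimal illustration, not the paper's implementation: the network architecture, perturbation noise, XOR task, and all hyperparameter names (`branch_every`, `depth`, `n_branches`, `noise`) are assumptions for demonstration only.

```python
import numpy as np

def init_net(rng, n_in=2, n_hid=8, n_out=1):
    # Hypothetical 1-hidden-layer network; the paper does not fix an architecture.
    return {
        "W1": rng.normal(0, 0.5, (n_in, n_hid)), "b1": np.zeros(n_hid),
        "W2": rng.normal(0, 0.5, (n_hid, n_out)), "b2": np.zeros(n_out),
    }

def forward(net, X):
    h = np.tanh(X @ net["W1"] + net["b1"])
    y = 1 / (1 + np.exp(-(h @ net["W2"] + net["b2"])))
    return h, y

def loss(net, X, T):
    _, y = forward(net, X)
    return float(np.mean((y - T) ** 2))

def sgd_step(net, X, T, lr=0.5):
    # One full-batch backpropagation step (mean squared error, sigmoid output).
    h, y = forward(net, X)
    dy = (y - T) * y * (1 - y) * 2 / len(X)
    dh = dy @ net["W2"].T * (1 - h ** 2)
    net["W2"] -= lr * (h.T @ dy); net["b2"] -= lr * dy.sum(0)
    net["W1"] -= lr * (X.T @ dh); net["b1"] -= lr * dh.sum(0)

def branch_and_train(X, T, epochs=200, branch_every=20, depth=10,
                     n_branches=4, noise=0.05, seed=0):
    rng = np.random.default_rng(seed)
    net = init_net(rng)
    e = 0
    while e < epochs:
        # Follow the current trajectory up to the next branch point.
        for _ in range(branch_every):
            sgd_step(net, X, T)
        e += branch_every
        # Branch: perturbed copies, each trained to a bounded depth only.
        # In the parallel version, branches would be trained concurrently.
        candidates = [net]
        for _ in range(n_branches):
            c = {k: v + rng.normal(0, noise, v.shape) for k, v in net.items()}
            for _ in range(depth):
                sgd_step(c, X, T)
            candidates.append(c)
        # Continue from the branch with the lowest training error.
        net = min(candidates, key=lambda c: loss(c, X, T))
    return net

# Toy demonstration on XOR (not one of the paper's benchmarks).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
T = np.array([[0], [1], [1], [0]], float)
net = branch_and_train(X, T)
final = loss(net, X, T)
```

Because each branch trains independently for a fixed `depth`, the branch evaluations are embarrassingly parallel, which is what makes the parallel implementation of the approach efficient.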