Parallel neural network learning through repetitive bounded depth trajectory branching

The neural network learning process is a sequence of network updates and can be represented by a sequence of points in the weight space, which we call a 'learning trajectory'. In this paper, a new learning approach based on repetitive bounded-depth trajectory branching is proposed. The approach aims to improve generalization and to speed up convergence by avoiding local minima when an alternative trajectory is selected. Experimental results show improved generalization compared with the standard backpropagation learning algorithm. The proposed parallel implementation dramatically improves the algorithm's efficiency, to the point that computing time is no longer a critical factor in achieving the improved generalization.
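The branching idea can be illustrated with a minimal sketch: from the current point on the learning trajectory, spawn several perturbed copies of the weights, train each copy for a bounded number of steps, keep the branch with the lowest loss, and repeat. The network, data, hyperparameters, and selection criterion below are illustrative assumptions, not the paper's actual implementation (which is parallel and uses its own branching rules).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task: fit y = sin(x) with a small tanh network.
X = np.linspace(-3, 3, 64).reshape(-1, 1)
y = np.sin(X)

def init_weights():
    return {
        "W1": rng.normal(0, 0.5, (1, 16)), "b1": np.zeros(16),
        "W2": rng.normal(0, 0.5, (16, 1)), "b2": np.zeros(1),
    }

def forward(w, X):
    h = np.tanh(X @ w["W1"] + w["b1"])
    return h @ w["W2"] + w["b2"], h

def loss(w):
    pred, _ = forward(w, X)
    return float(np.mean((pred - y) ** 2))

def sgd_step(w, lr=0.05):
    # One full-batch gradient step (manual backprop for the 2-layer net).
    pred, h = forward(w, X)
    g = 2 * (pred - y) / len(X)            # dL/dpred
    gW2, gb2 = h.T @ g, g.sum(0)
    gh = (g @ w["W2"].T) * (1 - h ** 2)    # backprop through tanh
    gW1, gb1 = X.T @ gh, gh.sum(0)
    return {
        "W1": w["W1"] - lr * gW1, "b1": w["b1"] - lr * gb1,
        "W2": w["W2"] - lr * gW2, "b2": w["b2"] - lr * gb2,
    }

def branch(w, noise=0.01):
    # Perturb weights slightly so each branch follows a different trajectory.
    return {k: v + rng.normal(0, noise, v.shape) for k, v in w.items()}

def train_branching(w, rounds=20, n_branches=4, depth=10):
    # Repeatedly branch the trajectory, explore each branch to a bounded
    # depth, and continue from the best branch found.
    for _ in range(rounds):
        candidates = []
        for _ in range(n_branches):
            b = branch(w)
            for _ in range(depth):         # bounded-depth exploration
                b = sgd_step(b)
            candidates.append(b)
        w = min(candidates, key=loss)      # keep the best trajectory
    return w

w = train_branching(init_weights())
print(f"final training MSE: {loss(w):.4f}")
```

In the paper's setting the branches would be trained in parallel, which is what makes the extra exploration essentially free in wall-clock terms; the sequential loop above is only for clarity.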