Does Terminal Attractor Backpropagation Guarantee Global Optimization?
Recently, Wang et al. (1991) introduced two new learning algorithms, called TABP and HTABP, which are based on the properties of terminal attractors. These algorithms were claimed to perform global optimization of the cost in finite time, provided that a null-cost solution exists. In this paper, we prove that, unfortunately, there is no theoretical guarantee that a global solution will be reached, unless the learning process starts in the domain of attraction of the global minimum. When the basin of a local minimum is entered, essentially random jumps in the weight space take place, which may lead to cycles. Moreover, when approaching local minima, overflow errors may also occur that force learning to stop. Finally, particular care must be taken to avoid numerical problems that may arise even when approaching the global minimum.
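The failure mode near local minima can be made concrete with a small numerical sketch. The snippet below is not the authors' implementation: it assumes a terminal-attractor update of the generic form dw/dt = -mu * E^beta * grad(E) / ||grad(E)||^2 with 0 < beta < 1 (which realizes the finite-time dynamics dE/dt = -mu * E^beta when a null solution exists), and it uses a made-up one-dimensional cost with a null global minimum and a positive local minimum. Started inside the local basin, the vanishing gradient makes the step length blow up, reproducing the erratic jumps and overflow described above.

```python
import math

# Illustrative 1-D cost (not from the paper): null global minimum at
# w = 1.5, and a local minimum with E > 0 near w = -1.08.
def cost(w):
    return (w - 1.5) ** 2 * ((w + 1.2) ** 2 + 0.3)

def grad(w, eps=1e-6):
    # Numerical derivative; an analytic gradient would work equally well.
    return (cost(w + eps) - cost(w - eps)) / (2 * eps)

def tabp_step(w, mu=0.1, beta=1.0 / 3.0):
    """Assumed TABP-style update: dw = -mu * E**beta * g / g**2 (1-D case).

    When a null solution is reachable, the E**beta factor (0 < beta < 1)
    drives the cost to zero in finite time; but near a local minimum the
    gradient vanishes while E stays positive, so the step size diverges.
    """
    E, g = cost(w), grad(w)
    return w - mu * E ** beta * g / (g * g)   # blows up as g -> 0 with E > 0

w = -0.5                      # start inside the local minimum's basin
for t in range(30):
    E, g = cost(w), grad(w)
    print(f"t={t:2d}  w={w: .4f}  E={E: .4f}  |grad|={abs(g):.2e}")
    if not math.isfinite(w) or abs(g) < 1e-12:
        print("update has diverged or overflowed near the local minimum")
        break
    w = tabp_step(w)
```

The divergence does not depend on the particular exponent: for any beta in (0, 1), the step magnitude mu * E^beta / |grad(E)| grows without bound as the gradient vanishes while the cost stays positive.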
[1] Chi-Ping Tsang, et al. "On the Convergence of Feed Forward Neural Networks Incorporating Terminal Attractors," 1993.
[2] Michail Zak, et al. "Terminal attractors in neural networks," Neural Networks, 1989.
[3] Ching-Chi Hsu, et al. "Terminal attractor learning algorithms for back propagation neural networks," Proceedings of the 1991 IEEE International Joint Conference on Neural Networks, 1991.