Proof of Convergence of Extended Divide & Conquer Networks

Determining an effective architecture for multi-layer feed-forward neural networks trained with backpropagation-like algorithms can be a time-consuming effort. In recent years, several algorithms have been proposed for dynamically constructing network architectures. Some of these algorithms have been shown to converge for binary data. Unfortunately, the results do not carry over to the non-binary input case: for classification problems with such inputs, no convergence guarantees have been provided. In this paper, we present an extension to the basic network-growing algorithms that allows networks to be constructed in bounded time. The algorithm guarantees convergence for any classification domain, as long as no contradictory training examples are present. The extension is described for the Divide & Conquer Networks (DCN) algorithm. The derived mathematical model can be readily incorporated into other network-growing approaches to ensure convergence. The model relies only on simple threshold cells and can therefore be combined with a variety of learning rules.
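The abstract refers to simple threshold cells as the building block of the construction. A minimal sketch of such a cell, using the classic perceptron update as one example learning rule (the specific rule used by DCN is not stated here, so the update rule, class names, and parameters below are illustrative assumptions), might look like:

```python
import numpy as np

class ThresholdCell:
    """A simple threshold unit: outputs 1 if w.x + b > 0, else 0.

    The perceptron update below is one possible learning rule; the
    convergence argument in the paper depends only on the threshold
    behavior, not on this particular rule.
    """

    def __init__(self, n_inputs, lr=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.normal(scale=0.1, size=n_inputs)  # small random weights
        self.b = 0.0
        self.lr = lr

    def predict(self, x):
        # Hard threshold on the weighted sum (no differentiable activation).
        return 1 if np.dot(self.w, x) + self.b > 0 else 0

    def train(self, X, y, epochs=100):
        # Perceptron rule: converges when the data are linearly separable
        # and contain no contradictory (duplicate-input, different-label)
        # examples. Returns True if a separating threshold was found.
        for _ in range(epochs):
            errors = 0
            for xi, yi in zip(X, y):
                err = yi - self.predict(xi)
                if err != 0:
                    self.w += self.lr * err * np.asarray(xi, dtype=float)
                    self.b += self.lr * err
                    errors += 1
            if errors == 0:
                return True
        return False
```

For example, training a single cell on the linearly separable AND function (`X = [[0,0],[0,1],[1,0],[1,1]]`, `y = [0,0,0,1]`) succeeds; a network-growing scheme would add further cells only where a single threshold cannot separate the data.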