Intrinsic and Parallel Performances of the OWE Neural Network Architecture

The OWE (Orthogonal Weight Estimator) architecture consists of a main MLP in which the value of each weight is computed by another MLP (an OWE). The number of OWEs therefore equals the number of weights of the main MLP. Since each OWE performs its computation independently, the training and relaxation phases can be straightforwardly parallelized. We report an implementation of this architecture on an Intel Paragon parallel computer and compare it with an implementation on a sequential computer.
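To make the structure concrete, the following is a minimal sketch, not the authors' implementation: one small MLP (an OWE) per weight of the main MLP, each mapping a shared context vector to a single scalar weight. The dimensions, the single-hidden-layer shape, and the context input are illustrative assumptions; the key point is that the per-weight loop has no cross-dependencies, which is what makes the architecture straightforward to parallelize.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_forward(x, W1, b1, W2, b2):
    # One-hidden-layer MLP with tanh activation.
    h = np.tanh(W1 @ x + b1)
    return W2 @ h + b2

# Main MLP dimensions (illustrative, not from the paper).
n_in, n_hidden, n_out = 4, 5, 2
n_weights = n_in * n_hidden + n_hidden * n_out  # weights of the main MLP

# One small OWE per main-network weight; each maps a shared
# context vector to one scalar weight value.
ctx_dim, owe_hidden = 3, 6
owes = [
    dict(
        W1=rng.normal(size=(owe_hidden, ctx_dim)),
        b1=np.zeros(owe_hidden),
        W2=rng.normal(size=(1, owe_hidden)),
        b2=np.zeros(1),
    )
    for _ in range(n_weights)
]

def main_forward(x, ctx):
    # Each OWE computes one weight of the main MLP independently of
    # the others, so this loop is embarrassingly parallel: on a
    # machine like the Paragon, the OWEs can be distributed across
    # processors with no communication during weight estimation.
    w = np.array([mlp_forward(ctx, **o)[0] for o in owes])
    # Assemble the estimated weights into the main MLP's matrices.
    W1 = w[: n_in * n_hidden].reshape(n_hidden, n_in)
    W2 = w[n_in * n_hidden :].reshape(n_out, n_hidden)
    h = np.tanh(W1 @ x)
    return W2 @ h

y = main_forward(rng.normal(size=n_in), rng.normal(size=ctx_dim))
```

Because the only shared inputs are the main network's input `x` and the context vector `ctx`, each OWE (and its gradient during training) can be evaluated on a separate processor, which is the property the parallel implementation exploits.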