Multiplication units in feedforward neural networks and their training

This paper proposes the application of neural networks with multiplication units to the parity-N problem, the mirror symmetry problem, and a function approximation problem. It is clear that higher-order terms in neural networks, such as sigma-pi units, can considerably improve the computational power of neural networks, although how real neurons achieve this remains unclear. We use one multiplication unit to construct the full higher-order terms of all the inputs, which proved very efficient for the parity-N problem. Our earlier work on applying multiplication units to other problems suffered from the drawback of gradient-based algorithms, such as backpropagation, which easily become stuck at local minima due to the complexity of the network. To overcome this problem, we consider a novel random search, RasID, for training neural networks with multiplication units; under a pure random search scheme, it performs an intensified search where good solutions are likely to be found locally and a diversified search to escape from local minima. The method shows its advantage in the training of neural networks with multiplication units.
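To illustrate why a single multiplication unit suffices for parity-N, the following minimal Python sketch maps binary inputs to bipolar values, so that the product of all inputs directly encodes the parity. The parameterization `prod_i (w_i * x_i + b_i)` is an assumption for illustration; the paper's exact unit may differ.

```python
import numpy as np

def multiplication_unit(x, w, b):
    """Multiplication unit: product of weighted inputs.

    Hypothetical form prod_i (w_i * x_i + b_i); the paper's exact
    parameterization may differ.
    """
    return np.prod(w * x + b)

def parity_n(bits):
    """Parity-N via a single multiplication unit.

    Map bits {0,1} -> bipolar {+1,-1}; the product of the bipolar
    values is -1 exactly when an odd number of bits are set.
    """
    x = 1.0 - 2.0 * np.asarray(bits, dtype=float)   # 0 -> +1, 1 -> -1
    p = multiplication_unit(x, w=np.ones_like(x), b=np.zeros_like(x))
    return int(p < 0)                               # 1 for odd parity

assert parity_n([1, 0, 1]) == 0   # even number of ones
assert parity_n([1, 1, 1]) == 1   # odd number of ones
```

A conventional sigmoidal network needs a hidden layer that grows with N to solve this problem, whereas the product term captures all input interactions at once.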
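The intensification/diversification idea behind RasID can be sketched as follows. This is only a schematic illustration of the behavior described above, with hypothetical step-size and patience parameters; the actual RasID algorithm adaptively controls its sampling distribution, which is not reproduced here.

```python
import numpy as np

def rasid_like_search(loss, w0, iters=5000, patience=50, seed=0):
    """Schematic random search with intensification and diversification.

    Intensified phase: small perturbations around the current weights.
    Diversified phase: a large random jump after `patience` failures,
    to escape a local minimum.
    """
    rng = np.random.default_rng(seed)
    w, best = w0.copy(), loss(w0)
    step, stall = 0.1, 0
    for _ in range(iters):
        if stall < patience:                  # intensified local search
            cand = w + step * rng.standard_normal(w.shape)
        else:                                 # diversified escape jump
            cand = w + 10 * step * rng.standard_normal(w.shape)
            stall = 0
        f = loss(cand)
        if f < best:                          # keep only improvements
            w, best, stall = cand, f, 0
        else:
            stall += 1
    return w, best

# Example: minimize a simple multimodal function in place of the
# network's training loss.
w_opt, f_opt = rasid_like_search(lambda w: np.sum(w**2 + np.sin(5 * w)),
                                 w0=np.full(3, 2.0))
```

Because such a search uses only loss evaluations and no gradients, it is unaffected by the sharp, multimodal error surfaces that multiplication units induce, which is the motivation given above for preferring it over backpropagation.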