Training the piecewise linear high-order neural network through error backpropagation

The piecewise linear high-order neural network, a structure consisting of two layers of modifiable weights, is introduced. The hidden units implement a piecewise linear function in the augmented input space, which includes high-order terms. The output units compute a linear threshold function over the hidden-unit responses. The model is trained with a self-adapting fast backpropagation algorithm based on the SuperSAB algorithm. Simulation results on the XOR/parity problem up to dimension three show that these networks converge several times faster than standard feedforward multilayer networks and sigma-pi networks.
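The core ideas above can be sketched in a minimal form: once the high-order product term x1*x2 is appended to the input, XOR becomes linearly separable, so even a single unit over the augmented input can learn it, and a SuperSAB-style per-weight rule (grow a learning rate while its gradient keeps the same sign, shrink it on a sign flip) can drive the training. The function name, constants, and the simplified update (no step retraction on a sign flip) are illustrative assumptions, not the paper's actual two-layer implementation.

```python
import numpy as np

def train_supersab_xor(epochs=2000, eta0=0.1, up=1.05, down=0.5,
                       eta_max=0.5, seed=0):
    """Delta-rule training of a single linear threshold unit on XOR over
    an augmented input that includes the high-order term x1*x2.
    Illustrative sketch only, not the paper's full architecture."""
    rng = np.random.default_rng(seed)
    X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
    y = np.array([0., 1., 1., 0.])
    # Augmented input: [x1, x2, x1*x2, bias].
    A = np.column_stack([X, X[:, 0] * X[:, 1], np.ones(len(X))])
    w = rng.normal(scale=0.1, size=A.shape[1])
    eta = np.full_like(w, eta0)         # one adaptive rate per weight
    prev_g = np.zeros_like(w)
    for _ in range(epochs):
        g = A.T @ (A @ w - y) / len(y)  # mean-squared-error gradient
        # SuperSAB-style rule: grow a weight's rate while its gradient
        # keeps the same sign, shrink it when the sign flips.
        same = g * prev_g
        eta = np.where(same > 0, np.minimum(eta * up, eta_max),
                       np.where(same < 0, eta * down, eta))
        w -= eta * g
        prev_g = g
    preds = (A @ w > 0.5).astype(int)   # threshold the linear output
    return w, preds

w, preds = train_supersab_xor()
print(preds)
```

With the product term present, the target is realized exactly by w = [1, 1, -2, 0], which is why a single augmented unit suffices here; the full model in the paper instead stacks piecewise linear hidden units under a thresholded output layer.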