Stretch and hammer neural networks

Stretch and hammer neural networks use radial basis function methods to achieve advantages in generalizing from training examples. These advantages include (1) exact learning, (2) maximally smooth modeling of Gaussian deviations from linear relationships, (3) identical outputs for arbitrary linear combinations of inputs, and (4) training without adjustable parameters in a predeterminable number of steps. Stretch and hammer neural networks are feedforward architectures with separate hidden neuron layers for stretching and hammering, in accordance with an easily visualized physical model. Training consists of (1) transforming the inputs to principal component coordinates, (2) finding the least squares hyperplane through the training points, (3) finding the Gaussian radial basis function variances at the column diagonal dominance limit, and (4) finding the Gaussian radial basis function coefficients. The Gaussian radial basis function variances are chosen to be as large as possible while still maintaining diagonal dominance of the simultaneous linear equations that must be solved to obtain the basis function coefficients; diagonal dominance guarantees that this system is nonsingular, so the coefficients are unique. This choice ensures that generalization from the training examples is maximally smooth, consistent with unique training in a predeterminable number of steps. Stretch and hammer neural networks have been used successfully in several practical applications.
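
The four training steps map naturally onto a short numerical routine. The following Python sketch is illustrative, not the paper's implementation: the function names, the single shared variance for all basis functions, the bisection search used to locate the column diagonal dominance limit, and the search bounds are all assumptions, and the code assumes distinct training inputs.

    import numpy as np

    def train_stretch_and_hammer(X, y):
        # Hypothetical sketch of the four training steps; a shared isotropic
        # variance and a bisection search are assumptions, not the paper's method.
        # (1) "Stretch": transform inputs to principal component coordinates.
        mean = X.mean(axis=0)
        Xc = X - mean
        _, _, Vt = np.linalg.svd(Xc, full_matrices=False)  # principal axes
        Z = Xc @ Vt.T                        # training points in PC coordinates

        # (2) Least squares hyperplane through the training points.
        A = np.hstack([Z, np.ones((len(Z), 1))])
        plane, *_ = np.linalg.lstsq(A, y, rcond=None)
        residual = y - A @ plane             # the RBFs model only this residual

        # (3) "Hammer": one Gaussian RBF per training point; grow the shared
        # variance as far as possible while the interpolation matrix stays
        # column diagonally dominant (each diagonal entry, which is 1, still
        # exceeds the sum of the off-diagonal entries in its column).
        d2 = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)  # squared distances

        def dominance_margin(var):
            G = np.exp(-d2 / (2.0 * var))
            off = G.sum(axis=0) - np.diag(G)  # off-diagonal column sums
            return (np.diag(G) - off).min()   # > 0 means strictly dominant

        lo, hi = 1e-6, 1e6                   # assumed heuristic search bounds
        for _ in range(100):                 # bisect for the largest admissible variance
            mid = 0.5 * (lo + hi)
            if dominance_margin(mid) > 0.0:
                lo = mid
            else:
                hi = mid
        var = lo

        # (4) Solve the diagonally dominant (hence nonsingular) linear system
        # for the coefficients that reproduce the residuals exactly.
        G = np.exp(-d2 / (2.0 * var))
        coeffs = np.linalg.solve(G, residual)
        return mean, Vt, plane, var, coeffs, Z

    def predict(x, model):
        # Hyperplane prediction plus the Gaussian RBF correction.
        mean, Vt, plane, var, coeffs, Z = model
        z = (x - mean) @ Vt.T
        rbf = np.exp(-((Z - z) ** 2).sum(-1) / (2.0 * var))
        return np.array([*z, 1.0]) @ plane + rbf @ coeffs

    # Usage: exact learning means training points are reproduced exactly.
    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, size=(30, 2))
    y = X[:, 0] + 0.3 * np.sin(3 * X[:, 1])
    model = train_stretch_and_hammer(X, y)
    print(predict(X[0], model), y[0])        # these should agree to machine precision

Because the bisection stops just inside the dominant region, the Gram matrix in step (4) is strictly column diagonally dominant and therefore nonsingular, so the coefficient solve is unique and the whole procedure finishes in a number of steps fixed in advance by the size of the training set, consistent with the claim of training without adjustable parameters.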