Generalization in layered classification neural networks

The authors demonstrate how the use of arbitrary nonlinearities can improve the storage capacity of a class of layered classification artificial neural networks (L-CANNs). The network's storage capacity is on the order of the number of neurons used to stimulate the response. An L-CANN can be trained by viewing the training data only once, and no iteration is required in the recall mode. Classification boundaries corresponding to maximum points of confusion, if known, can also be learned. The manner in which the network responds to data outside the training set can be evaluated straightforwardly. The L-CANN can also recognize as unfamiliar stimuli on which it was not trained.
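The abstract does not give the network construction itself, but its headline properties (single-pass training, iteration-free recall, and rejection of unfamiliar stimuli) can be illustrated with a simple prototype-memory classifier. The sketch below is an assumption, not the authors' method: the class name `PrototypeClassifier`, the squashing function `phi`, and the rejection threshold `tau` are all hypothetical stand-ins chosen only to make the three properties concrete.

```python
# A minimal sketch (not the authors' construction): a prototype-memory
# classifier sharing the abstract's headline properties -- single-pass
# training, iteration-free recall, and rejection of unfamiliar stimuli.
import numpy as np

class PrototypeClassifier:
    """Stores each training pattern once; capacity grows with the number
    of stored units, loosely mirroring the claim that storage capacity is
    on the order of the number of neurons."""

    def __init__(self, nonlinearity=np.tanh, reject_threshold=2.0):
        # 'nonlinearity' stands in for the abstract's "arbitrary
        # nonlinearities"; any squashing function works in this sketch.
        self.phi = nonlinearity
        self.tau = reject_threshold  # distance beyond which input is "unfamiliar"
        self.prototypes = []         # stored stimuli
        self.labels = []

    def train(self, X, y):
        # Single pass over the data: each example is viewed exactly once
        # and stored directly; no gradient descent, no epochs.
        for x, label in zip(X, y):
            self.prototypes.append(self.phi(np.asarray(x, dtype=float)))
            self.labels.append(label)

    def classify(self, x):
        # Recall is one feedforward pass: find the nearest stored
        # prototype. If even the nearest is far away, report the
        # stimulus as unfamiliar rather than forcing a class label.
        z = self.phi(np.asarray(x, dtype=float))
        dists = [np.linalg.norm(z - p) for p in self.prototypes]
        i = int(np.argmin(dists))
        if dists[i] > self.tau:
            return None              # unfamiliar stimulus
        return self.labels[i]


if __name__ == "__main__":
    clf = PrototypeClassifier(reject_threshold=0.5)
    clf.train([[0.0, 0.0], [1.0, 1.0]], ["A", "B"])
    print(clf.classify([0.1, 0.0]))   # -> "A" (near a stored pattern)
    print(clf.classify([5.0, -5.0]))  # -> None (recognized as unfamiliar)
```

Under these assumptions, generalization to data outside the training set is easy to evaluate: the response to any input is determined entirely by its distance to the stored prototypes, which is why both the recall behavior and the unfamiliarity test require no iteration.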