Classification boundaries and gradients of trained multilayer perceptrons

An approach to query-based neural network learning is presented. Consider a layered perceptron partially trained for binary classification, whose single output neuron is trained toward 0 or 1. A test decision is made by thresholding the output at, for instance, 1/2. The set of inputs that produce an output of exactly 1/2 forms the classification boundary. A network inversion algorithm is used to generate points on this boundary. In addition, the classification gradient can be computed at each boundary point; this gradient provides a useful measure of the sharpness of the multidimensional decision surfaces. Using the boundary points and gradient information, conjugate input pair locations are generated and presented to an oracle for classification. These labeled data are then used to further refine the classification boundary, thereby increasing classification accuracy. The result can be a significant reduction in training-set cardinality compared with, for example, randomly generated data points.
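The procedure lends itself to a short sketch. The following is a minimal illustration rather than the authors' implementation: it assumes a one-hidden-layer sigmoid perceptron with hypothetical weights W1, b1, w2, b2 (all names are illustrative), inverts the network by gradient descent on the input until the output is near 1/2, evaluates the classification gradient (the derivative of the output with respect to the input) at that boundary point, and steps off the boundary along the positive and negative gradient directions to form a conjugate pair for an oracle to label.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, W1, b1, w2, b2):
    # Output of a one-hidden-layer perceptron with sigmoid units.
    h = sigmoid(W1 @ x + b1)
    return sigmoid(w2 @ h + b2)

def output_gradient(x, W1, b1, w2, b2):
    # Classification gradient: derivative of the scalar output with respect to x.
    h = sigmoid(W1 @ x + b1)
    y = sigmoid(w2 @ h + b2)
    # Chain rule: dy/dx = y(1-y) * W1^T (w2 * h * (1-h))
    return y * (1 - y) * (W1.T @ (w2 * h * (1 - h)))

def invert_to_boundary(x0, W1, b1, w2, b2, target=0.5, lr=0.5, steps=500):
    # Network inversion: gradient descent on the input until the output is near `target`.
    x = x0.copy()
    for _ in range(steps):
        y = forward(x, W1, b1, w2, b2)
        g = output_gradient(x, W1, b1, w2, b2)
        x -= lr * (y - target) * g          # descend the squared output error
    return x

def conjugate_pair(xb, W1, b1, w2, b2, eps=0.05):
    # Step off the boundary point along +/- the (normalized) classification gradient.
    g = output_gradient(xb, W1, b1, w2, b2)
    n = g / (np.linalg.norm(g) + 1e-12)
    return xb + eps * n, xb - eps * n       # conjugate pair to be labeled by the oracle

# Example with arbitrary (untrained) weights, purely for illustration.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 2)), rng.normal(size=4)
w2, b2 = rng.normal(size=4), rng.normal()
xb = invert_to_boundary(rng.normal(size=2), W1, b1, w2, b2)
x_plus, x_minus = conjugate_pair(xb, W1, b1, w2, b2)
print(forward(xb, W1, b1, w2, b2))          # should be close to 0.5

In a full query-learning loop, the oracle's labels for x_plus and x_minus would be added to the training set and the perceptron retrained, sharpening the boundary where it matters most.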
