Rejection of Incorrect Answers from a Neural Net Classifier

The notion of approximator rejection is described and applied to a neural network. For a real-world classification problem, the residual error is shown to decrease as the inverse exponential of the fraction of patterns rejected. The trade-off between “good” patterns rejected and “bad” patterns rejected is shown to grow approximately linearly with the rejection rate. A compromise is therefore necessary between the rejection rate, with its associated trade-off, and the residual error. A meta-level solution for removing the residual error is proposed, based on a modular system of parallel approximators.
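As a rough illustration of the kind of rejection rule the abstract refers to, the sketch below applies a maximum-output (confidence) threshold to a classifier's outputs and measures the resulting rejection rate and residual error on the accepted patterns. The threshold rule, the function name `reject_and_score`, and the synthetic data are assumptions for illustration only; the paper's own rejection criterion and dataset are not given in this section.

```python
import numpy as np


def reject_and_score(probs, labels, threshold):
    """Apply a confidence-threshold rejection rule and report the
    residual error on the accepted patterns.

    probs     : (N, C) array of classifier output probabilities
    labels    : (N,) array of true class indices
    threshold : reject any pattern whose top output falls below this value
    """
    confidence = probs.max(axis=1)        # top output per pattern
    accepted = confidence >= threshold    # mask of patterns kept
    rejection_rate = 1.0 - accepted.mean()

    if accepted.any():
        predictions = probs[accepted].argmax(axis=1)
        residual_error = np.mean(predictions != labels[accepted])
    else:
        residual_error = 0.0              # nothing accepted, nothing misclassified

    return rejection_rate, residual_error


# Sweep the threshold to trace a rejection-rate / residual-error curve.
# Synthetic outputs are used here purely so the sketch runs; on a real
# classifier the abstract reports residual error falling roughly as the
# inverse exponential of the fraction of patterns rejected.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    logits = rng.normal(size=(1000, 10))
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    labels = rng.integers(0, 10, size=1000)
    for t in (0.0, 0.2, 0.4, 0.6):
        r, e = reject_and_score(probs, labels, t)
        print(f"threshold={t:.1f}  rejection_rate={r:.2f}  residual_error={e:.2f}")
```

Tracing this curve over a range of thresholds is one way to quantify the compromise the abstract describes: each increase in rejection rate lowers the residual error but also discards a growing share of patterns that would have been classified correctly.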