Using Background Knowledge in Multilayer Perceptron Learning

In this contribution we present a method for constraining the learning of a Multilayer Perceptron network with background knowledge. The algorithms presented here can be used to train the partial derivatives of the network to match given numerical values or to minimize a given cost function. Thus the mapping produced by the network can be constrained according to known input-output models, monotonicity conditions, saturation effects, or any other knowledge that relates to the model derivatives. We demonstrate the performance of the proposed training method with artificial data, and also with an actual process modeling application.
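To make the idea concrete, the following is a minimal sketch (not the authors' exact algorithm) of one way to impose such a derivative constraint: a JAX implementation of a small MLP whose training loss combines a squared data-fit term with a penalty on the network's input derivatives, here enforcing a monotonicity condition dy/dx >= 0. All function names, the network architecture, the penalty weight lam, and the toy data are illustrative assumptions.

```python
# Sketch: constraining MLP input derivatives via a penalty term (assumed setup).
import jax
import jax.numpy as jnp

def mlp(params, x):
    # Single-hidden-layer perceptron with tanh activation; returns a scalar.
    (W1, b1), (W2, b2) = params
    h = jnp.tanh(W1 @ x + b1)
    return (W2 @ h + b2)[0]

# Partial derivatives of the scalar output with respect to the input vector.
dmlp_dx = jax.grad(mlp, argnums=1)

def loss(params, xs, ys, lam=1.0):
    # Standard squared-error data-fit term.
    preds = jax.vmap(lambda x: mlp(params, x))(xs)
    fit = jnp.mean((preds - ys) ** 2)
    # Derivative constraint: penalize negative dy/dx (monotone increasing model).
    derivs = jax.vmap(lambda x: dmlp_dx(params, x))(xs)
    constraint = jnp.mean(jnp.maximum(-derivs, 0.0) ** 2)
    return fit + lam * constraint

# Usage: initialize parameters and take gradient steps on the combined loss.
key = jax.random.PRNGKey(0)
k1, k2 = jax.random.split(key)
params = [(0.1 * jax.random.normal(k1, (8, 1)), jnp.zeros(8)),
          (0.1 * jax.random.normal(k2, (1, 8)), jnp.zeros(1))]
xs = jnp.linspace(-1.0, 1.0, 32).reshape(-1, 1)
ys = xs[:, 0] ** 3  # toy monotone target

grad_loss = jax.jit(jax.grad(loss))
for _ in range(200):
    g = grad_loss(params, xs, ys)
    params = jax.tree_util.tree_map(lambda p, gp: p - 0.05 * gp, params, g)
```

Matching the derivatives to given numerical values, rather than to an inequality as above, amounts to replacing the penalty with a squared difference between `derivs` and the target values.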