Concept Support as a Method for Programming Neural Networks with Symbolic Knowledge

Neural networks are usually seen as obtaining all their knowledge through training on examples. In many AI applications suited to neural networks, however, symbolic knowledge does exist that describes a large number of cases relatively well, or at least contributes to partial solutions. From a practical point of view, it appears wasteful to discard this knowledge altogether by training a network from scratch. This paper introduces a method for inserting symbolic knowledge into a neural network, called "concept support." The method is non-intrusive in that it does not rely on directly setting any internal variable, such as weights. Instead, knowledge is inserted through pre-training on concepts or rules believed to be essential for the task. The knowledge actually accessible to the neural network thus remains distributed, or subsymbolic. Results from a test application are reported which show considerable improvements in generalization.
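The following is a minimal sketch of the two-phase training scheme the abstract describes, not the paper's actual implementation: a network is first pre-trained on examples labeled by a symbolic rule (concept support), then trained on the task data by ordinary gradient descent. The network architecture, the illustrative rule, and all names (`make_net`, `train`, the placeholder tensors) are assumptions introduced here for illustration.

```python
import torch
import torch.nn as nn

def make_net():
    # Hypothetical small feed-forward network; the paper does not
    # prescribe a specific architecture here.
    return nn.Sequential(nn.Linear(8, 16), nn.Tanh(), nn.Linear(16, 1))

def train(net, inputs, targets, epochs=200, lr=0.05):
    # Ordinary supervised training loop; used for both phases.
    opt = torch.optim.SGD(net.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(net(inputs), targets)
        loss.backward()
        opt.step()

net = make_net()

# Phase 1: concept support. Labels come from a symbolic rule believed
# relevant to the task (here an arbitrary illustrative rule: 1 if the
# first two features agree in sign). Note that no weight is ever set
# directly; the rule enters the network only through pre-training, so
# the resulting knowledge stays distributed (subsymbolic).
x_concept = torch.randn(256, 8)
y_concept = ((x_concept[:, 0] * x_concept[:, 1]) > 0).float().unsqueeze(1)
train(net, x_concept, y_concept)

# Phase 2: continue training on the actual task data (placeholder
# tensors standing in for a real dataset).
x_task = torch.randn(256, 8)
y_task = torch.randn(256, 1)
train(net, x_task, y_task)
```

Under this reading, the non-intrusive character of the method is simply that both phases use the same training procedure; the symbolic knowledge only shapes the data seen in phase 1.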