Refining Numerical Terms in Horn Clauses

This paper presents an experimental analysis of a recently proposed method for refining knowledge bases expressed in a first-order logic language. The method consists of transforming a classification theory into a neural network, called a First Order logic Neural Network (FONN), by replacing predicate semantic functions and logical connectives with continuous-valued differentiable functions. In this way it becomes possible to tune the numerical constants in the original theory by performing gradient descent on the error. The classification theory to be refined can be handcrafted or acquired automatically by a symbolic relational learning system able to deal with numerical features. The emphasis of this paper is on the experimental analysis of the method: extensive experimentation is reported, covering different choices for encoding the logical connectives and different variants of the learning strategy. The experiments, carried out on a challenging artificial case study, show that FONNs converge quite quickly and generalize better than propositional learners do on an equivalent task definition.
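As a concrete illustration of the idea described above, the following Python sketch makes a single clause with a numerical threshold differentiable and refines the threshold by gradient descent. The sigmoid encoding of the threshold predicate, the product t-norm for the conjunction, and all names and parameter values are illustrative assumptions, not necessarily the encodings evaluated in the paper.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Soft version of the threshold predicate x > theta: close to 1 when x
# exceeds theta, close to 0 otherwise; beta controls the sharpness.
def soft_gt(x, theta, beta=5.0):
    return sigmoid(beta * (x - theta))

# Soft AND of truth values in [0, 1]; here, the product t-norm
# (one possible choice of connective encoding).
def soft_and(a, b):
    return a * b

# Differentiable reading of the clause  class(X) :- f(X) > theta, p(X),
# where t_p is the (already soft) truth value of the second premise.
def clause(x, t_p, theta):
    return soft_and(soft_gt(x, theta), t_p)

def refine_theta(examples, theta=0.0, lr=0.1, epochs=300, eps=1e-4):
    """Tune theta by gradient descent on the squared error; the gradient
    is approximated by central finite differences to keep the sketch
    dependency-free (a FONN would backpropagate analytically)."""
    def loss(th):
        return sum((clause(x, t_p, th) - y) ** 2 for x, t_p, y in examples)
    for _ in range(epochs):
        grad = (loss(theta + eps) - loss(theta - eps)) / (2 * eps)
        theta -= lr * grad
    return theta

# Toy data: the correct threshold is 3.0, while the initial theory says 0.0.
examples = [(0.25 * i, 1.0, 1.0 if 0.25 * i > 3.0 else 0.0) for i in range(25)]
print(f"refined theta: {refine_theta(examples):.2f}")  # moves from 0.0 toward 3.0
```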