A New Incremental Learning Technique For Decision Trees With Thresholds

This paper presents basic algorithms for manipulating decision trees with thresholds, based on discrete decision theory. This algebraic approach to discrete decision theory provides, in particular, syntactic techniques for reducing the size of decision trees. If one takes the view that the object of a learning algorithm is to give an economical representation of the observations, then this reduction technique provides the key to a method of learning. The basic algorithms supporting the incremental learning of decision trees are discussed, together with the modifications required to perform reasonable learning when threshold decisions are present. The main algorithm discussed is an incremental learning algorithm that works by maintaining an association irreducible tree representing the observations. At each iteration a new observation is added, and an efficient reduction of the tree enlarged by that observation is undertaken. The results of some simple experiments are discussed; they suggest that this method of learning holds promise and may in some situations outperform standard heuristic techniques.
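The incremental loop described above (keep a reduced tree; insert each new observation; reduce the enlarged tree) can be sketched as follows. This is an illustrative toy over boolean attributes, not the paper's algorithm: the names (`Leaf`, `Node`, `insert`, `reduce_tree`) are hypothetical, and the reduction shown, collapsing any test whose two outcomes agree, is a much simpler rule than the association-irreducibility maintained in the paper.

```python
# Toy incremental decision-tree learner over boolean attributes.
# Names and reduction rule are illustrative, not the paper's.

class Leaf:
    def __init__(self, label):
        self.label = label
    def __eq__(self, other):
        return isinstance(other, Leaf) and self.label == other.label

class Node:
    def __init__(self, attr, low, high):
        self.attr, self.low, self.high = attr, low, high
    def __eq__(self, other):
        return (isinstance(other, Node) and self.attr == other.attr
                and self.low == other.low and self.high == other.high)

def insert(tree, obs, label, attrs):
    """Route one observation (dict of boolean attribute values) into the
    tree, splitting leaves as needed so the new example gets its label."""
    if not attrs:
        return Leaf(label)          # path fully specified: (re)label the leaf
    a, rest = attrs[0], attrs[1:]
    if isinstance(tree, Leaf):
        tree = Node(a, tree, tree)  # expand the leaf so the path can diverge
    if obs[a]:
        return Node(tree.attr, tree.low, insert(tree.high, obs, label, rest))
    return Node(tree.attr, insert(tree.low, obs, label, rest), tree.high)

def reduce_tree(tree):
    """Bottom-up reduction: a test whose two outcomes are identical
    is redundant and is replaced by either outcome."""
    if isinstance(tree, Leaf):
        return tree
    low, high = reduce_tree(tree.low), reduce_tree(tree.high)
    if low == high:
        return low
    return Node(tree.attr, low, high)

# Incremental learning: add one observation, then reduce, at each step.
tree = Leaf(0)
attrs = ["x", "y"]
examples = [({"x": 1, "y": 1}, 1), ({"x": 1, "y": 0}, 1), ({"x": 0, "y": 0}, 0)]
for obs, label in examples:
    tree = reduce_tree(insert(tree, obs, label, attrs))
```

After the three examples the reduction discovers that the label depends only on `x`, so the maintained tree is the single test `Node("x", Leaf(0), Leaf(1))`: the economical representation of the observations is exactly what the learner keeps.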