This paper presents basic algorithms for manipulating decision trees with thresholds, grounded in discrete decision theory. This algebraic approach provides, in particular, syntactic techniques for reducing the size of decision trees. If one takes the view that the object of a learning algorithm is to give an economical representation of the observations, then this reduction technique provides the key to a method of learning. The basic algorithms supporting the incremental learning of decision trees are discussed, together with the modifications required to perform reasonable learning when threshold decisions are present. The main algorithm is an incremental learning algorithm that works by maintaining an association-irreducible tree representing the observations: at each iteration a new observation is added, and an efficient reduction of the tree enlarged by that example is undertaken. The results of some simple experiments suggest that this method of learning holds promise and may, in some situations, outperform standard heuristic techniques.
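The enlarge-then-reduce loop sketched in the abstract can be illustrated with a small piece of code. Everything below is an illustrative assumption rather than the paper's algebraic formulation: trees are `(attribute, branch-dict)` tuples with string leaves, tests follow a fixed attribute order, a `"*"` branch stands for "any other value", and the only reduction applied is the familiar "irrelevant test" rule (a node all of whose branches reduce to the same subtree collapses to that subtree). The paper instead maintains an association-irreducible tree; this sketch only mimics the overall shape of the loop.

```python
from typing import Dict, Optional, Tuple, Union

Tree = Union[Tuple[str, dict], str]  # internal node or class-label leaf

ATTRS = ["outlook", "windy"]  # hypothetical fixed test order

def insert(t: Optional[Tree], obs: Dict[str, str], label: str,
           depth: int = 0) -> Tree:
    """Enlarge the tree so that it classifies `obs` as `label`."""
    if depth == len(ATTRS):
        return label
    attr = ATTRS[depth]
    if isinstance(t, str):
        if t == label:
            return t               # already classified correctly, no growth
        branches = {"*": t}        # keep the old behaviour as a default branch
    elif t is None:
        branches = {}
    else:
        branches = dict(t[1])
    v = obs[attr]
    child = branches.get(v, branches.get("*"))
    branches[v] = insert(child, obs, label, depth + 1)
    return (attr, branches)

def reduce_tree(t: Tree) -> Tree:
    """Bottom-up reduction: drop tests all of whose branches agree."""
    if not isinstance(t, tuple):
        return t
    attr, branches = t
    reduced = {v: reduce_tree(s) for v, s in branches.items()}
    subtrees = list(reduced.values())
    if all(s == subtrees[0] for s in subtrees):
        return subtrees[0]         # the test is irrelevant here
    return (attr, reduced)

def classify(t: Tree, obs: Dict[str, str]) -> str:
    """Follow the tests, falling back to the '*' branch when needed."""
    while isinstance(t, tuple):
        attr, branches = t
        t = branches.get(obs[attr], branches.get("*"))
    return t

# The incremental loop: enlarge by one example, then reduce immediately.
tree = None
for obs, label in [
    ({"outlook": "sunny", "windy": "yes"}, "play"),
    ({"outlook": "sunny", "windy": "no"},  "play"),
    ({"outlook": "rain",  "windy": "yes"}, "stay"),
]:
    tree = reduce_tree(insert(tree, obs, label))
```

Reducing after every insertion keeps the representation economical at all times, which is the view of learning-as-compression the abstract takes; the `"*"` default branch is what lets a subtree collapsed by an earlier reduction be safely re-expanded when a later observation contradicts it.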