Learning fuzzy decision trees

We present a recurrent neural network which learns to suggest the next move during the descent along the branches of a decision tree. More precisely, given a decision instance represented by a node in the decision tree, the network provides the degree of membership of each possible move to the fuzzy set ⟨good move⟩. These fuzzy values form the core of the probability of selecting that move from the set of children of the current node. This results in a natural way of driving the sharp discrete-state process running along the decision tree by means of incremental methods on the continuous-valued parameters of the neural network. The bulk of the learning problem consists in establishing useful links between the local decisions about the next move and the global decisions about the suitability of the final solution. The peculiarity of the learning task is that the network must deal explicitly with the twofold charge of singling out the best solution and generating the move sequence that leads to it. We tested various options for the learning procedure on the problem of disambiguating natural language sentences.
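The descent mechanism sketched above can be made concrete with a small example. The Python fragment below is a minimal, hypothetical illustration, not the authors' architecture: a toy recurrent scorer assigns each child of the current node a membership degree in [0, 1] for the fuzzy set ⟨good move⟩, the degrees are normalized into selection probabilities, and a root-to-leaf path is sampled accordingly. All names (Node, RecurrentMoveScorer, and so on) are illustrative assumptions.

import numpy as np

# Hypothetical sketch: a recurrent scorer that assigns each candidate move a
# membership degree in [0, 1] for the fuzzy set <good move>; the degrees over
# the children of the current node are normalized into selection probabilities
# that drive the discrete descent along the tree.

rng = np.random.default_rng(0)

class Node:
    """Decision-tree node; 'features' describes the move leading into it."""
    def __init__(self, features, children=None):
        self.features = np.asarray(features, dtype=float)
        self.children = children or []          # empty list => leaf

class RecurrentMoveScorer:
    def __init__(self, n_features, n_hidden):
        self.W_in = rng.normal(scale=0.1, size=(n_hidden, n_features))
        self.W_rec = rng.normal(scale=0.1, size=(n_hidden, n_hidden))
        self.w_out = rng.normal(scale=0.1, size=n_hidden)
        self.n_hidden = n_hidden

    def membership(self, h, move_features):
        # Recurrent update followed by a sigmoid: degree of membership in [0, 1].
        h_next = np.tanh(self.W_in @ move_features + self.W_rec @ h)
        return 1.0 / (1.0 + np.exp(-self.w_out @ h_next)), h_next

    def descend(self, root):
        # Sample a root-to-leaf path; children are chosen with probability
        # proportional to their membership degree.
        h = np.zeros(self.n_hidden)
        node, path = root, [root]
        while node.children:
            degrees, states = zip(*(self.membership(h, c.features)
                                    for c in node.children))
            probs = np.asarray(degrees) / np.sum(degrees)
            idx = rng.choice(len(node.children), p=probs)
            node, h = node.children[idx], states[idx]
            path.append(node)
        return path

# Toy usage: a small tree with random 4-dimensional move descriptions.
leaves = [Node(rng.normal(size=4)) for _ in range(4)]
root = Node(np.zeros(4), [Node(rng.normal(size=4), leaves[:2]),
                          Node(rng.normal(size=4), leaves[2:])])
path = RecurrentMoveScorer(n_features=4, n_hidden=8).descend(root)
print("sampled a path with", len(path), "nodes")

Normalizing the degrees is only one of several ways to turn fuzzy memberships into selection probabilities; the coupling between the fuzzy values and the sampling scheme, as well as the training procedure that links local moves to the quality of the final solution, is assumed here and is more elaborate in the actual method.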
