Towards a generalization of the decompositional approach to rule extraction from multilayer artificial neural networks

Recent developments in knowledge discovery have highlighted a large number of applications in which the need for explanation is at the heart of the process. Using neural networks in such applications requires the ability to provide a set of rules, extracted from the trained networks, that helps the user understand what has been learned. The current literature reports two kinds of rules: ‘if condition then conclusion’ (if-then) and ‘if m of the conditions then conclusion’ (MofN). We propose a new method that extracts a single intermediate structure (called the generators list) from which both forms of rules can be derived. This extracted structure is a generic representation that allows the user to visualize either form of rules extracted from a multilayer artificial neural network.
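To make the two rule forms and the role of an intermediate structure concrete, the following minimal Python sketch shows how a single entry of a hypothetical generators list (a set of input conditions plus a threshold m) can be rendered either as one MofN rule or expanded into equivalent if-then rules. The class name Generator, the field names, and the conversion functions are illustrative assumptions only; they are not the paper's actual data structure or extraction algorithm.

    from dataclasses import dataclass
    from itertools import combinations
    from typing import List, Tuple

    # NOTE: illustrative sketch only -- "Generator", "to_mofn" and "to_if_then"
    # are hypothetical names, not taken from the paper.

    @dataclass(frozen=True)
    class Generator:
        """One entry of a hypothetical generators list: a set of input
        conditions, a threshold m, and the conclusion they support."""
        conditions: Tuple[str, ...]   # e.g. ("x1 > 0.5", "x3 = low", "x7 > 0.2")
        m: int                        # how many of the conditions must hold
        conclusion: str               # class label predicted by the network

    def to_mofn(g: Generator) -> str:
        """Render the generator as a single compact M-of-N rule."""
        return f"IF {g.m} of {{{', '.join(g.conditions)}}} THEN {g.conclusion}"

    def to_if_then(g: Generator) -> List[str]:
        """Expand the generator into equivalent if-then rules: one rule for
        each subset of exactly m conditions (their disjunction covers the
        M-of-N rule)."""
        return [
            f"IF {' AND '.join(subset)} THEN {g.conclusion}"
            for subset in combinations(g.conditions, g.m)
        ]

    if __name__ == "__main__":
        g = Generator(conditions=("x1 > 0.5", "x3 = low", "x7 > 0.2"),
                      m=2, conclusion="class A")
        print(to_mofn(g))           # one M-of-N rule
        for rule in to_if_then(g):  # three expanded if-then rules
            print(rule)

Expanding the 2-of-3 generator above yields three if-then rules, which illustrates why a single intermediate representation can serve both rule forms that the user may want to visualize.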
