Connection between the beam width and the learning success rate in the phase transition framework for relational learning

It is well known that heuristic search in relational learning is prone to plateau phenomena. One explanation is that the covering test in relational learning is NP-complete and therefore exhibits a sharp phase transition in its coverage probability. Because the heuristic value of a hypothesis depends on the number of examples it covers, hypotheses falling in the “yes” and “no” regions receive no informative heuristic value, so these regions form plateaus. Marco Botta et al. ran several learning algorithms on a large set of artificially generated problems and argued that the phase transition dooms every learning algorithm to fail to identify the target concept. However, they did not consider the influence of the beam width on the learning success rate. In this paper, we investigate whether the low learning success rate caused by the phase transition can be improved by enlarging the beam width. The FOIL and kFOIL learning systems are each run on artificially generated data sets with beam widths of 1, 5, 10, 20, and 30. The experiments show that the beam width has almost no effect on the learning success rate within the phase transition framework.
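The beam-width knob varied in the experiments can be illustrated with a generic top-down beam search in the style of FOIL-like rule learners. The following Python sketch is illustrative only and is not the authors' or FOIL/kFOIL's actual code; all names (`Clause`, `refine`, `score`, `is_solution`) are assumptions introduced here for the example.

```python
# Illustrative sketch (not the authors' implementation): top-down rule search
# with a configurable beam width, in the style of FOIL-like learners.
from dataclasses import dataclass, field
from typing import Callable, List, Optional


@dataclass
class Clause:
    """A candidate clause, i.e. a partial rule body (hypothetical representation)."""
    literals: List[str] = field(default_factory=list)
    score: float = 0.0  # heuristic value computed from the covered examples


def beam_search(
    refine: Callable[[Clause], List[Clause]],   # generates specializations of a clause
    score: Callable[[Clause], float],           # heuristic based on covered examples
    is_solution: Callable[[Clause], bool],      # e.g. covers positives, rejects negatives
    beam_width: int = 1,
    max_depth: int = 10,
) -> Optional[Clause]:
    """Keep the `beam_width` best clauses at each refinement step.

    With beam_width = 1 this degenerates to greedy hill climbing; larger beams
    keep more alternatives alive, which is the knob varied in the experiments
    (1, 5, 10, 20, 30).
    """
    beam = [Clause()]  # start from the empty body (most general clause)
    for _ in range(max_depth):
        candidates = []
        for clause in beam:
            for child in refine(clause):
                child.score = score(child)
                candidates.append(child)
        if not candidates:
            return None
        # On a plateau (all scores equal in the "yes"/"no" regions) this ordering
        # is uninformative, which is why a wider beam may add little guidance.
        candidates.sort(key=lambda c: c.score, reverse=True)
        beam = candidates[:beam_width]
        for clause in beam:
            if is_solution(clause):
                return clause
    return None
```

The sketch makes the design point concrete: when the heuristic is flat across the “yes” and “no” regions, `candidates[:beam_width]` merely selects among equally scored clauses, so widening the beam changes which plateau clauses are kept but not how informatively the search is guided.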

[1] Michèle Sebag et al. Analyzing Relational Learning in the Phase Transition Framework, 2000, ICML.

[2] Włodzisław Duch et al. Artificial intelligence approaches for rational drug design and discovery, 2007, Current Pharmaceutical Design.

[3] Stephen Muggleton et al. Efficient Induction of Logic Programs, 1990, ALT.

[4] Michèle Sebag et al. Relational Learning as Search in a Critical Region, 2003, J. Mach. Learn. Res.

[5] Thanh Phuong Nguyen et al. Prediction of Domain-Domain Interactions Using Inductive Logic Programming from Multiple Genome Databases, 2006, Discovery Science.

[6] Raymond J. Mooney et al. Learning with Markov logic networks: transfer learning, structure learning, and an application to web query disambiguation, 2009.

[7] Maozu Guo et al. Web Page Classification Using Relational Learning Algorithm and Unlabeled Data, 2011, J. Comput.

[8] Ismail Hakki Toroslu et al. A comparative study on ILP-based concept discovery systems, 2011, Expert Syst. Appl.

[9] Y. Dehbi et al. Learning grammar rules of building parts from precise models and noisy observations, 2011.

[10] Stephen Muggleton et al. Machine Invention of First Order Predicates by Inverting Resolution, 1988, ML.

[11] M. Biba. Integrating Logic and Probability: Algorithmic Improvements in Markov Logic Networks, 2009.

[12] Fatos Xhafa et al. Stochastic simulation and modelling of metabolic networks in a machine learning framework, 2011, Simul. Model. Pract. Theory.

[13] Stefan Kramer et al. Inductive logic programming for gene regulation prediction, 2007, Machine Learning.

[14] Marco Botta et al. An Experimental Study of Phase Transitions in Matching, 1999, IJCAI.

[15] Lorenza Saitta et al. Phase Transitions in Relational Learning, 2000, Machine Learning.

[16] Aomar Osmani et al. On the connection between the phase transition of the covering test and the learning success rate in ILP, 2008, Machine Learning.

[17] Luc De Raedt et al. kFOIL: Learning Simple Relational Kernels, 2006, AAAI.

[18] J. R. Quinlan. Learning Logical Definitions from Relations, 1990.

[19] Tu Bao Ho et al. Using Inductive Logic Programming for Predicting Protein-Protein Interactions from Multiple Genomic Data, 2005, PKDD.

[20] Luc De Raedt et al. nFOIL: Integrating Naïve Bayes and FOIL, 2005, AAAI.

[21] Luc De Raedt et al. Integrating Naïve Bayes and FOIL, 2007, J. Mach. Learn. Res.

[22] Jesse Davis et al. Using Bayesian Classifiers to Combine Rules, 2004.

[23] Luc De Raedt. Logical and Relational Learning, 2008, SBIA.

[24] Igor Kononenko et al. Naive Bayesian classifier within ILP-R, 1995.