The ACE Data Mining System User's Manual

[1] Ashwin Srinivasan, et al. Query Transformations for Improving the Efficiency of ILP Systems, 2003, J. Mach. Learn. Res.

[2] Saso Dzeroski, et al. First order random forests: Learning relational classifiers with complex aggregates, 2006, Machine Learning.

[3] J. Ross Quinlan. C4.5: Programs for Machine Learning, 1992.

[4] Andrew W. Moore, et al. Reinforcement Learning: A Survey, 1996, J. Artif. Intell. Res.

[5] Hendrik Blockeel, et al. Top-Down Induction of First Order Logical Decision Trees, 1998, AI Commun.

[6] Luc De Raedt, et al. Relational Reinforcement Learning, 2001, Encyclopedia of Machine Learning and Data Mining.

[7] Luc Dehaspe, et al. Discovery of relational association rules, 2001.

[8] Luc De Raedt, et al. Multi-class Problems and Discretization in ICL (Extended Abstract), 1996.

[9] Leo Breiman. Out-of-Bag Estimation, 1996.

[10] Yoav Freund, et al. Experiments with a New Boosting Algorithm, 1996, ICML.

[11] Saso Dzeroski, et al. Integrating Experimentation and Guidance in Relational Reinforcement Learning, 2002, ICML.

[12] Jan Ramon, et al. Transfer learning for reinforcement learning through goal and policy parametrization, 2006, ICML.

[13] Wim Van Laer. From Propositional to First Order Logic in Machine Learning and Data Mining: Induction of first order rules with ICL, 2002.

[14] Kurt Driessens, et al. Speeding Up Relational Reinforcement Learning through the Use of an Incremental First Order Decision Tree Learner, 2001, ECML.

[15] Luc De Raedt, et al. Lookahead and Discretization in ILP, 1997, ILP.

[16] Luc Dehaspe. Frequent Pattern Discovery in First-Order Logic, 1999, AI Commun.

[17] Heikki Mannila, et al. Fast Discovery of Association Rules, 1996, Advances in Knowledge Discovery and Data Mining.

[18] Celine Vens, et al. Refining Aggregate Conditions in Relational Learning, 2006, PKDD.

[19] Ron Kohavi, et al. Supervised and Unsupervised Discretization of Continuous Features, 1995, ICML.

[20] Saso Dzeroski, et al. Combining model-based and instance-based learning for first order regression, 2005, BNAIC.

[21] Thomas Gärtner, et al. Graph kernels and Gaussian processes for relational reinforcement learning, 2006, Machine Learning.

[22] Usama M. Fayyad, et al. Multi-Interval Discretization of Continuous-Valued Attributes for Classification Learning, 1993, IJCAI.

[23] Luc De Raedt, et al. Top-Down Induction of Clustering Trees, 1998, ICML.

[24] Richard S. Sutton, et al. Reinforcement Learning: An Introduction, 1998, IEEE Trans. Neural Networks.

[25] Luc De Raedt, et al. Inductive Constraint Logic, 1995, ALT.

[26] Luc De Raedt, et al. Clausal Discovery, 1997, Machine Learning.

[27] Luc De Raedt, et al. Three companions for data mining in first order logic, 2001.

[28] Bart Demoen, et al. Improving the Efficiency of Inductive Logic Programming Through the Use of Query Packs, 2011, J. Artif. Intell. Res.

[29] Leo Breiman. Random Forests, 2001, Machine Learning.

[30] Jan Ramon. On the convergence of reinforcement learning using a decision tree learner, 2005, ICML.

[31] Shigeo Kaneda, et al. C4.5: Programs for Machine Learning (book review), 1995.

[32] Saso Dzeroski, et al. Integrating Guidance into Relational Reinforcement Learning, 2004, Machine Learning.

[33] Maurice Bruynooghe, et al. A Comparison of Approaches for Learning Probability Trees, 2005, ECML.

[34] Leo Breiman. Bagging Predictors, 1996, Machine Learning.

[35] Bojan Dolsak, et al. The Application of Inductive Logic Programming to Finite Element Mesh Design, 1992.

[36] Leslie Pack Kaelbling, et al. Input Generalization in Delayed Reinforcement Learning: An Algorithm and Performance Comparisons, 1991, IJCAI.

[37] Claire Nédellec, et al. Declarative Bias in ILP, 1996.

[38] Hannu Toivonen, et al. Discovery of frequent DATALOG patterns, 1999, Data Mining and Knowledge Discovery.

[39] Kurt Driessens, et al. Relational Instance Based Regression for Relational Reinforcement Learning, 2003, ICML.

[40] Celine Vens, et al. ReMauve: A Relational Model Tree Learner, 2006, ILP.

[41] Hendrik Blockeel, et al. Query Optimization in Inductive Logic Programming by Reordering Literals, 2003, ILP.