Learning Classifier Systems (LCS) differ from many other classification techniques in that new rules are constantly discovered and evaluated. This feature of LCS gives rise to an important problem: how to deal with estimates of rule accuracy that are unreliable because only a small number of performance samples is available. In this paper we highlight the importance of this problem for LCS, summarise previous heuristic approaches to it, and propose instead the use of principles from Bayesian estimation. In particular, we argue that discounting accuracy estimates on the basis of inexperience must be recognised as a crucially important part of the specification of an LCS, and must be well motivated. We present experimental results on the Bayesian approach to discounting, consider how to estimate its parameters, and identify benefits of its use for other areas of LCS.
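To make the idea concrete, the sketch below illustrates one common form of Bayesian discounting: placing a Beta(alpha, beta) prior on a rule's accuracy and reporting the posterior mean rather than the raw ratio of correct to matched instances. This is an illustrative assumption about the general principle, not necessarily the exact scheme used in the paper; the function names and default prior parameters are hypothetical.

```python
# Minimal sketch of Bayesian discounting of rule accuracy (assumed Beta prior;
# not necessarily the paper's exact formulation).

def raw_accuracy(correct: int, matched: int) -> float:
    """Maximum-likelihood accuracy estimate; unreliable when `matched` is small."""
    return correct / matched if matched > 0 else 0.0


def bayesian_accuracy(correct: int, matched: int,
                      alpha: float = 1.0, beta: float = 1.0) -> float:
    """Posterior-mean accuracy under a Beta(alpha, beta) prior.

    With few samples the estimate is pulled towards the prior mean
    alpha / (alpha + beta); with many samples it approaches correct / matched.
    """
    return (correct + alpha) / (matched + alpha + beta)


if __name__ == "__main__":
    # An inexperienced rule (1 correct out of 1 match) looks perfect under the
    # raw estimate but is heavily discounted by the Bayesian estimate.
    print(raw_accuracy(1, 1), bayesian_accuracy(1, 1))          # 1.0 vs ~0.67
    # An experienced rule with the same observed accuracy is barely discounted.
    print(raw_accuracy(100, 100), bayesian_accuracy(100, 100))  # 1.0 vs ~0.98
```

The prior parameters alpha and beta play the role of the hand-tuned inexperience discounts used in earlier heuristic approaches, but here they have a clear probabilistic interpretation as pseudo-counts of prior successes and failures.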