Q-Learning and Redundancy Reduction in Classifier Systems with Internal State

Q-Credit Assignment (QCA) is a Q-learning-based method for allocating credit to rules in Classifier Systems with internal state. It is more powerful than other proposed methods because it correctly evaluates shared rules, but it incurs a large computational cost due to the Multi-Layer Perceptron (MLP) that stores the evaluation function. We present a method that reduces this cost by removing redundancy from the MLP's input space through feature extraction. The experimental results show that QCA with Redundancy Reduction (QCA-RR) preserves the advantages of the QCA while significantly reducing both the learning time and the post-learning evaluation time.
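
As a rough illustration of the idea described above (not the paper's implementation), the following Python/NumPy sketch pairs an MLP-based Q-update with a compressed input layer: a fixed random projection stands in for the learned feature extractor, and all names, dimensions, and the toy transition are invented for the example. The cost saving comes from the MLP seeing the reduced feature vector rather than the raw, redundant input.

```python
import numpy as np

rng = np.random.default_rng(0)

RAW_DIM = 64     # raw (redundant) input: message plus internal-state bits (illustrative)
FEAT_DIM = 16    # compressed feature space after redundancy reduction
N_ACTIONS = 8    # one Q-output per candidate action/rule (illustrative)
HIDDEN = 32

# Fixed random projection as a stand-in for the learned feature extractor.
P = rng.normal(size=(RAW_DIM, FEAT_DIM)) / np.sqrt(RAW_DIM)

def extract(x):
    """Map a raw input vector to the reduced feature space."""
    return np.tanh(x @ P)

# One-hidden-layer MLP approximating Q(features, action).
W1 = rng.normal(size=(FEAT_DIM, HIDDEN)) * 0.1
b1 = np.zeros(HIDDEN)
W2 = rng.normal(size=(HIDDEN, N_ACTIONS)) * 0.1
b2 = np.zeros(N_ACTIONS)

def q_values(x):
    """Q-values for all actions given a raw input."""
    h = np.tanh(extract(x) @ W1 + b1)
    return h @ W2 + b2

def q_update(x, a, r, x_next, alpha=0.1, gamma=0.9):
    """One Q-learning step: move Q(x, a) toward r + gamma * max_a' Q(x', a')."""
    global W1, b1, W2, b2
    f = extract(x)
    h = np.tanh(f @ W1 + b1)
    q = h @ W2 + b2
    target = r + gamma * np.max(q_values(x_next))
    err = target - q[a]                      # TD error for the taken action
    # Gradient descent on 0.5 * err**2 through the output taken.
    grad_h = -err * W2[:, a] * (1.0 - h**2)  # error signal at the hidden layer
    W2[:, a] += alpha * err * h
    b2[a] += alpha * err
    W1 -= alpha * np.outer(f, grad_h)
    b1 -= alpha * grad_h

# Toy usage: a single transition with random binary vectors as stand-ins.
x = rng.integers(0, 2, RAW_DIM).astype(float)
x_next = rng.integers(0, 2, RAW_DIM).astype(float)
q_update(x, a=3, r=1.0, x_next=x_next)
```

Because every forward and backward pass now costs O(FEAT_DIM * HIDDEN) rather than O(RAW_DIM * HIDDEN), both training and post-learning evaluation shrink in proportion to the compression ratio, which is the effect the abstract attributes to QCA-RR.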