Employing Automatic Temporal Abstractions to Accelerate Utile Suffix Memory Algorithm

The main objective of memory-based reinforcement learning algorithms for hidden-state problems is to overcome state aliasing by maintaining a form of short-term memory during learning. The extended sequence tree method, on the other hand, is a sequence-based automatic temporal abstraction mechanism that can be attached to a reinforcement learning algorithm; assuming a fully observable problem setting, it discovers useful sub-policies in the solution space that can be reused as temporally extended actions, yielding significant savings in learning time. This paper presents a way to extend a well-known memory-based, model-free reinforcement learning algorithm, Utile Suffix Memory, with a modified version of the extended sequence tree method. In this way, the learning speed of the algorithm is improved under certain conditions. The improvement is demonstrated empirically through experiments on benchmark problems.
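To make the idea of resolving hidden state with short-term memory concrete, the sketch below conditions Q-values on a fixed-length suffix of recent observation-action pairs in a toy corridor where two cells emit the same observation. It is a simplified, illustrative stand-in for Utile Suffix Memory (which instead grows a suffix tree and splits leaves by statistical utility tests) and does not include the extended-sequence-tree extension described in the paper; the environment, names, and parameters are assumptions made for the example.

```python
from collections import defaultdict
import random

# Minimal sketch, NOT the authors' implementation: a fixed-length history
# window stands in for the suffix tree of Utile Suffix Memory. Q-values are
# keyed by the last K (observation, action) pairs, which is enough to show
# how short-term memory separates aliased observations.

K = 2          # length of the history suffix used as the agent's state
ALPHA = 0.1    # learning rate
GAMMA = 0.95   # discount factor
EPSILON = 0.1  # exploration rate
ACTIONS = ["left", "right"]

Q = defaultdict(float)  # maps (history_suffix, action) -> value estimate

def suffix(history):
    """The last K (observation, action) pairs act as the memory-based state."""
    return tuple(history[-K:])

def choose_action(history):
    """Epsilon-greedy selection over the current history suffix."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(suffix(history), a)])

def q_update(history, action, reward, next_history, done):
    """Standard Q-learning backup, but over history suffixes rather than raw observations."""
    best_next = 0.0 if done else max(Q[(suffix(next_history), a)] for a in ACTIONS)
    key = (suffix(history), action)
    Q[key] += ALPHA * (reward + GAMMA * best_next - Q[key])

def step(pos, action):
    """Toy 5-cell corridor: cells 1 and 3 emit the same observation ('corridor')."""
    pos = max(0, min(4, pos + (1 if action == "right" else -1)))
    obs = {0: "start", 1: "corridor", 2: "middle", 3: "corridor", 4: "goal"}[pos]
    reward = 1.0 if pos == 4 else 0.0
    return pos, obs, reward

for episode in range(500):
    pos, obs = 0, "start"
    history = [(obs, None)]
    for _ in range(20):
        action = choose_action(history)
        pos, obs, reward = step(pos, action)
        next_history = history + [(obs, action)]
        done = (obs == "goal")
        q_update(history, action, reward, next_history, done)
        history = next_history
        if done:
            break
```

Because the Q-table is indexed by history suffixes, the two aliased "corridor" cells receive separate value estimates whenever their preceding observations differ, which is the effect the memory-based approach relies on; the temporal-abstraction extension would additionally reuse frequently successful action sequences as single timed actions.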
