Most work on Predictive Representations of State (PSRs) focuses on learning a complete model of a system, one that can answer any question about the future. Often, however, we are interested only in certain kinds of abstract questions: for instance, we may care about the presence of objects in an image rather than pixel-level details. In such cases, we may be able to learn substantially smaller models that answer only those abstract questions. We present the framework of PSR homomorphisms for model abstraction in PSRs. A homomorphism transforms a given PSR into a smaller PSR that provides exact answers to abstract questions about the original PSR. As we shall show, this transformation captures both structural and temporal abstractions in the original system.
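The structural-abstraction idea can be illustrated with a toy sketch (all names and numbers here are illustrative assumptions, not taken from the paper): an abstraction function `h` collapses detailed observations into abstract ones, and the exact abstract prediction is the sum of the original PSR's predictions over each preimage of `h`.

```python
# Hypothetical toy example of structural abstraction in a PSR.
# The original model predicts one-step tests (action, observation);
# the abstract question only asks whether the observed object is a circle.

# Original PSR predictions for one-step tests (illustrative values).
original_predictions = {
    ("look", "red_circle"): 0.25,
    ("look", "blue_circle"): 0.25,
    ("look", "red_square"): 0.5,
}

def h(observation):
    """Abstraction: collapse pixel/color detail, keep only the shape class."""
    return "circle" if observation.endswith("circle") else "other"

def abstract_prediction(action, abstract_obs):
    """Exact abstract answer: sum original predictions over h's preimage."""
    return sum(p for (a, o), p in original_predictions.items()
               if a == action and h(o) == abstract_obs)

print(abstract_prediction("look", "circle"))  # 0.25 + 0.25 = 0.5
```

The smaller model needs only the two abstract tests `("look", "circle")` and `("look", "other")`, yet its answers to the abstract questions agree exactly with the original model, which is the defining property of the homomorphism.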