Transfer learning is one way to close the gap between the apparent speed of human learning and the relatively slow pace of learning in machines. Transfer is doubly beneficial in reinforcement learning, where the agent must not only generalize from sparse experience but also explore efficiently. In this paper, we show that the hierarchical Bayesian framework can be readily adapted to sequential decision problems, where it provides a natural formalization of transfer learning. Using this framework, we report empirical results in a simple colored-maze domain and a complex real-time strategy game. The results show that our Hierarchical Bayesian Transfer framework significantly improves learning speed when tasks are hierarchically related.
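To make the transfer idea concrete, the following is a minimal sketch of hierarchical Bayesian transfer on a toy multi-task Gaussian bandit, standing in for the paper's colored-maze and real-time-strategy domains. Everything here is an illustrative assumption rather than the paper's method: the moment-matching fit of the hyperprior (in place of MCMC posterior inference), the Thompson-sampling learner, and all names (`sample_task`, `thompson_run`, the constants) are hypothetical.

```python
# A minimal sketch of hierarchical Bayesian transfer, assuming a toy
# multi-task Gaussian bandit. All names and the moment-matching step are
# hypothetical illustrations, not the authors' implementation.
import numpy as np

rng = np.random.default_rng(0)

N_ARMS = 5
OBS_NOISE = 1.0  # known observation noise (a simplifying assumption)

def sample_task(mu0, tau0):
    """Draw one task's arm means from the shared hyperprior N(mu0, tau0^2)."""
    return rng.normal(mu0, tau0, size=N_ARMS)

def thompson_run(true_means, prior_mu, prior_tau, steps=200):
    """Thompson sampling with an independent Gaussian prior per arm.
    Returns cumulative regret and the posterior means of the arm values."""
    post_mu = np.full(N_ARMS, prior_mu, dtype=float)
    post_var = np.full(N_ARMS, prior_tau**2, dtype=float)
    regret, best = 0.0, true_means.max()
    for _ in range(steps):
        # Sample a value for each arm from its posterior; play the argmax.
        a = int(np.argmax(rng.normal(post_mu, np.sqrt(post_var))))
        r = rng.normal(true_means[a], OBS_NOISE)
        regret += best - true_means[a]
        # Conjugate Gaussian update for the chosen arm.
        prec = 1.0 / post_var[a] + 1.0 / OBS_NOISE**2
        post_mu[a] = (post_mu[a] / post_var[a] + r / OBS_NOISE**2) / prec
        post_var[a] = 1.0 / prec
    return regret, post_mu

# Ground-truth hyperparameters that relate the tasks hierarchically.
TRUE_MU0, TRUE_TAU0 = 2.0, 0.5

# Phase 1: learn the hyperprior from several source tasks.
source_estimates = []
for _ in range(20):
    means = sample_task(TRUE_MU0, TRUE_TAU0)
    _, est = thompson_run(means, prior_mu=0.0, prior_tau=10.0)  # vague prior
    source_estimates.append(est)
est = np.concatenate(source_estimates)
# Moment matching in lieu of full Bayesian inference over hyperparameters;
# it slightly overestimates tau0 because posterior noise is folded in.
mu0_hat, tau0_hat = est.mean(), est.std()

# Phase 2: transfer. Solve a new task with the learned prior vs. a vague one.
new_task = sample_task(TRUE_MU0, TRUE_TAU0)
regret_transfer, _ = thompson_run(new_task, mu0_hat, tau0_hat)
regret_scratch, _ = thompson_run(new_task, 0.0, 10.0)
print(f"regret with learned hyperprior: {regret_transfer:.1f}")
print(f"regret with vague prior:        {regret_scratch:.1f}")
```

With the learned hyperprior, the posterior for a new task starts near the true task distribution, so exploration is focused from the first step and cumulative regret drops, mirroring the speedup on hierarchically related tasks that the abstract reports.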