$A^2T$: Attend, Adapt and Transfer: Attentive Deep Architecture for Adaptive Transfer from Multiple Sources

The ability to transfer knowledge from source tasks to a new target task can greatly speed up learning for a Reinforcement Learning agent. Such transfer has received considerable attention lately, yet applying it poses two serious challenges that have not been adequately addressed. First, the agent should be able to avoid negative transfer, which happens when the transfer hampers or slows down learning instead of helping it. Second, the agent should be able to perform selective transfer: the ability to select and transfer from different and multiple source tasks for different parts of the state space of the target task. We propose $A^2T$ (Attend, Adapt and Transfer), an attentive deep architecture for adaptive transfer, which addresses these challenges. $A^2T$ is generic enough to effect transfer of either policies or value functions. Empirical evaluations with different learning algorithms show that $A^2T$ is an effective architecture for transfer learning: it avoids negative transfer while transferring selectively from multiple sources.
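
As a concrete illustration of the architecture the abstract describes, below is a minimal PyTorch sketch of attention-based selective transfer: a trainable attention network produces state-dependent weights over N frozen source-task solutions plus a base network learned from scratch, and the agent acts on their convex combination. The class name `A2T`, the two-layer MLPs, and the constructor arguments here are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class A2T(nn.Module):
    """Attend, Adapt and Transfer: soft attention over frozen
    source-task networks and a trainable base network."""

    def __init__(self, state_dim, action_dim, source_nets, hidden=128):
        super().__init__()
        # Source solutions (policies or value functions) are kept frozen;
        # only the base and attention networks train on the target task.
        self.source_nets = nn.ModuleList(source_nets)
        for p in self.source_nets.parameters():
            p.requires_grad = False
        # Base network learned from scratch, so the agent can fall back
        # on it when no source helps (avoiding negative transfer).
        self.base_net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim),
        )
        # Attention network: one weight per source plus one for the base.
        self.attention = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, len(source_nets) + 1),
        )

    def forward(self, state):
        # Each candidate maps state -> action logits (or Q-values).
        candidates = [net(state) for net in self.source_nets]
        candidates.append(self.base_net(state))
        stacked = torch.stack(candidates, dim=1)               # (B, N+1, A)
        weights = torch.softmax(self.attention(state), dim=1)  # (B, N+1)
        # State-dependent convex combination: different sources can
        # dominate in different parts of the state space (selective transfer).
        return (weights.unsqueeze(-1) * stacked).sum(dim=1)
```

Because the attention weights are a function of the state, the agent can rely on different source tasks in different regions of the state space, and the base network gives it a solution to fall back on when none of the sources is useful.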
