Learning in environments with large state and action spaces and sparse rewards can hinder a Reinforcement Learning (RL) agent's ability to learn through trial and error. For instance, following natural language instructions on the Web (such as booking a flight ticket) leads to RL settings where the input vocabulary and the number of actionable elements on a page can grow very large. Even though recent approaches improve the success rate on relatively simple environments with the help of human demonstrations that guide exploration, they still fail in environments where the set of possible instructions can reach millions. We approach these problems from a different perspective and propose guided RL approaches that can generate an unbounded amount of experience for an agent to learn from. Instead of learning from a complicated instruction with a large vocabulary, we decompose it into multiple sub-instructions and schedule a curriculum in which the agent is tasked with a gradually increasing subset of these relatively easier sub-instructions. In addition, when expert demonstrations are not available, we propose a novel meta-learning framework that generates new instruction-following tasks and trains the agent more effectively. We train a DQN, a deep reinforcement learning agent, whose Q-value function is approximated by a novel QWeb neural network architecture, on these smaller, synthetic instructions. We evaluate the ability of our agent to generalize to new instructions on the World of Bits benchmark, on forms with up to 100 elements that support 14 million possible instructions. The QWeb agent outperforms the baseline without using any human demonstrations, achieving a 100% success rate on several difficult environments.
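To make the curriculum idea concrete, below is a minimal Python sketch of how a complicated instruction could be decomposed into sub-instructions and scheduled with gradually increasing difficulty. It assumes the instruction can be represented as key-value fields (as in a flight-booking form); the field names, values, and the episode-based schedule are illustrative assumptions, not the exact procedure from the paper.

```python
import random

# Hypothetical full instruction for a flight-booking form, expressed as
# key-value fields; the actual environments use natural language instructions.
FULL_INSTRUCTION = {
    "origin": "San Francisco",
    "destination": "New York",
    "departure_date": "2018-12-03",
    "passenger_name": "Jane Doe",
}

def sample_sub_instruction(instruction, difficulty):
    """Sample a sub-instruction containing `difficulty` of the fields.

    `difficulty` grows from 1 to len(instruction) over training, so the
    agent first solves single-field tasks before the full instruction.
    """
    keys = random.sample(list(instruction), k=difficulty)
    return {k: instruction[k] for k in keys}

# Toy schedule: raise the number of fields every `steps_per_stage` episodes.
steps_per_stage = 1000
for episode in range(4000):
    difficulty = min(episode // steps_per_stage + 1, len(FULL_INSTRUCTION))
    sub_instruction = sample_sub_instruction(FULL_INSTRUCTION, difficulty)
    # Placeholder: reset the web environment with `sub_instruction` and
    # run one training episode of the agent here.
```

A real schedule could also advance the difficulty based on the agent's success rate rather than a fixed episode count; the subset-sampling structure stays the same.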