How much credit (or blame) should an action taken in a state get for a future reward? This is the fundamental temporal credit assignment problem in Reinforcement Learning (RL). One of the earliest and still most widely used heuristics assigns this credit using a scalar coefficient λ (treated as a hyperparameter) raised to the power of the time interval between the state-action pair and the reward. In this empirical paper, we explore heuristics based on more general pairwise weightings that are functions of the state in which the action was taken, the state at the time of the reward, and the time interval between the two. It is not obvious what these pairwise weight functions should be, and because they are too complex to be treated as hyperparameters, we develop a metagradient procedure for learning them during the usual RL training of a policy. Our experiments show that it is often possible to learn these pairwise weight functions while training the policy and thereby achieve better performance than competing approaches.
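To make the generalization concrete, here is a minimal sketch in GAE-style notation; the notation and the exact form of the estimator are assumptions for illustration, not the paper's precise formulation. The standard λ-weighted estimator discounts the TD error k steps ahead by (γλ)^k, while the pairwise scheme replaces the scalar λ^k with a learned weight w_θ that depends on the crediting state, the state at the time of the reward, and the interval k.

% Standard λ-weighted (GAE-style) advantage estimate: a single scalar
% hyperparameter λ controls how credit decays with the time interval k.
\hat{A}_t^{\lambda} = \sum_{k \ge 0} (\gamma\lambda)^k \, \delta_{t+k},
\qquad \delta_{t+k} = r_{t+k} + \gamma V(s_{t+k+1}) - V(s_{t+k})

% Pairwise-weighted generalization sketched in the abstract: λ^k is
% replaced by a learned function w_θ of the state where the action was
% taken, the state at the time of the reward, and the interval k.
% w_θ (a hypothetical parameterization here) is trained by metagradients
% alongside the usual RL training of the policy.
\hat{A}_t^{w} = \sum_{k \ge 0} \gamma^k \, w_\theta(s_t, s_{t+k}, k) \, \delta_{t+k}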