Adaptive Pairwise Weights for Temporal Credit Assignment
Format: Article
Language: English
Abstract: How much credit (or blame) should an action taken in a state get for a future reward? This is the fundamental temporal credit assignment problem in Reinforcement Learning (RL). One of the earliest and still most widely used heuristics is to assign this credit based on a scalar coefficient, $\lambda$ (treated as a hyperparameter), raised to the power of the time interval between the state-action and the reward. In this empirical paper, we explore heuristics based on more general pairwise weightings that are functions of the state in which the action was taken, the state at the time of the reward, and the time interval between the two. Of course, it is not clear what these pairwise weight functions should be, and because they are too complex to be treated as hyperparameters, we develop a metagradient procedure for learning these weight functions during the usual RL training of a policy. Our empirical work shows that it is often possible to learn these pairwise weight functions during learning of the policy and thereby achieve better performance than competing approaches.
DOI: 10.48550/arxiv.2102.04999
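
The weighting scheme described in the abstract is easy to illustrate. Below is a minimal sketch, not taken from the paper: the toy state features, the example weight function, and all function names are illustrative assumptions. It shows that the standard scalar-$\lambda$ heuristic, which weights a reward at time $t$ for an action taken at time $k$ by $\lambda^{t-k}$, is the special case of a pairwise weight $w(s_k, s_t, t-k)$ that ignores the two states; the paper's contribution is to learn such a $w$ (via metagradients) rather than fix it by hand.

```python
# Minimal sketch (assumed, not the authors' code): scalar-lambda credit
# weighting vs. a general pairwise weight w(s_k, s_t, t - k).
import numpy as np

def lambda_weighted_return(rewards, k, lam=0.9):
    """Credit assigned to the action at step k: sum_t lam^(t-k) * r_t."""
    T = len(rewards)
    weights = lam ** np.arange(T - k)          # lam^(t-k) for t = k..T-1
    return float(np.dot(weights, rewards[k:]))

def pairwise_weighted_return(states, rewards, k, weight_fn):
    """Credit for the action at step k under a pairwise weighting
    w(s_k, s_t, t - k); `weight_fn` would be the learned function."""
    T = len(rewards)
    total = 0.0
    for t in range(k, T):
        total += weight_fn(states[k], states[t], t - k) * rewards[t]
    return total

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    T = 8
    states = rng.normal(size=(T, 4))           # toy 4-dim state features
    rewards = rng.normal(size=T)
    lam = 0.9

    # The scalar-lambda heuristic is the special case
    # w(s_k, s_t, dt) = lam ** dt, independent of both states.
    as_special_case = pairwise_weighted_return(
        states, rewards, k=2,
        weight_fn=lambda s_k, s_t, dt: lam ** dt)
    print(lambda_weighted_return(rewards, k=2, lam=lam), as_special_case)

    # A hypothetical state-dependent weight: a bounded similarity between
    # the two states, decayed by the time interval (purely illustrative;
    # in the paper such a function is learned by metagradients).
    def toy_pairwise_weight(s_k, s_t, dt):
        sim = 1.0 / (1.0 + np.linalg.norm(s_k - s_t))
        return sim * (lam ** dt)

    print(pairwise_weighted_return(states, rewards, k=2,
                                   weight_fn=toy_pairwise_weight))
```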