Foundations of Multivariate Distributional Reinforcement Learning
Format: Article
Language: English
Abstract: In reinforcement learning (RL), the consideration of multivariate reward signals has led to fundamental advancements in multi-objective decision-making, transfer learning, and representation learning. This work introduces the first oracle-free and computationally tractable algorithms for provably convergent multivariate distributional dynamic programming and temporal difference learning. Our convergence rates match the familiar rates in the scalar reward setting, and additionally provide new insights into the fidelity of approximate return distribution representations as a function of the reward dimension. Surprisingly, when the reward dimension is larger than $1$, we show that standard analysis of categorical TD learning fails, which we resolve with a novel projection onto the space of mass-$1$ signed measures. Finally, with the aid of our technical results and simulations, we identify tradeoffs between distribution representations that influence the performance of multivariate distributional RL in practice.
DOI: 10.48550/arxiv.2409.00328
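To make the kind of update the abstract describes concrete, here is a minimal sketch under several simplifying assumptions: a fixed grid support, a nearest-neighbour stand-in projection (the paper's projection onto mass-$1$ signed measures is its own contribution and is not reproduced here), and function and variable names that are all hypothetical. It shows a categorical TD step for a $d$-dimensional return distribution in which the Bellman target is mapped back onto a fixed atom support while total mass $1$ is preserved.

```python
import numpy as np

# Minimal sketch of a multivariate categorical TD update.  Everything here
# (grid support, nearest-neighbour projection, step size) is an illustrative
# assumption, not the paper's exact construction.

def make_grid(v_min, v_max, n_atoms, d):
    """Fixed support: a regular grid of n_atoms**d atoms in R^d."""
    axes = [np.linspace(v_min, v_max, n_atoms) for _ in range(d)]
    mesh = np.meshgrid(*axes, indexing="ij")
    return np.stack([m.ravel() for m in mesh], axis=-1)   # (n_atoms**d, d)

def project_onto_atoms(points, masses, atoms):
    """Map a discrete measure (points, masses) back onto the fixed atoms by
    nearest-neighbour assignment.  Total mass 1 is preserved exactly; the
    representation permits signed weights, although this particular stand-in
    never creates them (the paper's projection, which can, is the novel
    technical contribution and is not reproduced here)."""
    sq_dists = ((points[:, None, :] - atoms[None, :, :]) ** 2).sum(axis=-1)
    nearest = sq_dists.argmin(axis=1)        # index of closest atom per point
    out = np.zeros(len(atoms))
    np.add.at(out, nearest, masses)          # accumulate mass onto the grid
    return out

def categorical_td_update(p, r, p_next, atoms, gamma=0.99, alpha=0.1):
    """One TD step on the weights p of the current return-distribution
    estimate: push the bootstrap atoms through the Bellman map
    z -> r + gamma * z, project back onto the grid, then mix."""
    shifted = r[None, :] + gamma * atoms            # Bellman-mapped atoms, (n, d)
    target = project_onto_atoms(shifted, p_next, atoms)
    return (1.0 - alpha) * p + alpha * target       # total mass stays 1

# Tiny 2-dimensional-reward example.
atoms = make_grid(-1.0, 1.0, n_atoms=5, d=2)        # 25 atoms in R^2
p = np.full(len(atoms), 1.0 / len(atoms))           # current estimate
p_next = p.copy()                                   # bootstrap distribution
p = categorical_td_update(p, r=np.array([0.3, -0.2]), p_next=p_next, atoms=atoms)
print(p.sum())                                      # 1.0 up to float error
```

The invariant mirrored from the abstract is that the weight vector is only constrained to sum to $1$, not to be non-negative, so the update remains well-defined even when a projection produces signed masses.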