On Dynamic Programming Decompositions of Static Risk Measures in Markov Decision Processes
Main authors:
Format: Article
Language: English
Keywords:
Online access: Order full text
Published in: Advances in Neural Information Processing Systems (NeurIPS), 2023
Abstract: Optimizing static risk-averse objectives in Markov decision processes is difficult because they do not admit the standard dynamic programming equations common in Reinforcement Learning (RL) algorithms. Dynamic programming decompositions that augment the state space with discrete risk levels have recently gained popularity in the RL community. Prior work has shown that these decompositions are optimal when the risk level is discretized sufficiently finely. However, we show that these popular decompositions for Conditional-Value-at-Risk (CVaR) and Entropic-Value-at-Risk (EVaR) are inherently suboptimal regardless of the discretization level. In particular, we show that a saddle point property assumed to hold in prior literature may be violated. In contrast, a decomposition does hold for Value-at-Risk (VaR), and our proof demonstrates how this risk measure differs from CVaR and EVaR. Our findings are significant because risk-averse algorithms are used in high-stakes environments, making their correctness much more critical.
DOI: 10.48550/arxiv.2304.12477
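
As background for the abstract's terminology, the three static risk measures it compares admit standard variational definitions. The following is a sketch under one common convention (X a loss random variable, confidence level \(\alpha \in (0,1)\)); sign and level conventions vary across the literature and need not match the paper's:

% Standard definitions, one common convention (not reproduced from the paper):
\begin{align*}
\mathrm{VaR}_\alpha(X)  &= \inf\{\, t \in \mathbb{R} : \Pr[X \le t] \ge \alpha \,\}, \\
\mathrm{CVaR}_\alpha(X) &= \min_{t \in \mathbb{R}} \Big\{\, t + \tfrac{1}{1-\alpha}\, \mathbb{E}\big[(X - t)_+\big] \Big\}, \\
\mathrm{EVaR}_\alpha(X) &= \inf_{\beta > 0}\, \tfrac{1}{\beta} \log\!\Big( \mathbb{E}\big[e^{\beta X}\big] \big/ (1-\alpha) \Big).
\end{align*}

Roughly speaking, the augmented-state decompositions mentioned in the abstract carry a discretized risk level through the dynamic program and rely on exchanging an optimization with an expectation in representations like those above; the abstract's point is that the saddle point property assumed to justify this exchange can fail for CVaR and EVaR, whereas VaR, defined directly through a quantile with no auxiliary optimization, does admit a valid decomposition.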