Risk-sensitive control of Markov decision processes: A moment-based approach with target distributions
Saved in:
Published in: Computers & operations research 2020-11, Vol. 123, p. 104997, Article 104997
Author:
Format: Article
Language: English
Subjects:
Online access: Full text
Summary:
• We present a multi-valued dynamic programming approach that allows controlling the moments of the distributions of future rewards.
• We exploit recursive computations of higher-order moments of future rewards associated with a given feedback policy (see the sketch below).
• We propose a heuristic self-tuning algorithm that identifies feedback policies approximating a predetermined (risk-sensitive) target distribution.
• Our approach is generally applicable, easy to implement, and does not require an extension of the state space.
• We demonstrate the quality and flexibility of our approach for dynamic pricing scenarios.
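The recursive moment computation mentioned in the highlights is not spelled out in this record. As a rough illustration only, assuming a finite-horizon, finite-state MDP and a fixed deterministic feedback policy (the names `policy_reward_moments`, `P`, `R`, and `policy` below are hypothetical, not taken from the paper), the k-th moment of the reward-to-go can be propagated backwards via a binomial expansion of (one-step reward + remaining reward)^k:

```python
import numpy as np
from math import comb

def policy_reward_moments(P, R, policy, T, K):
    """Backward recursion for the first K moments of the cumulative reward
    earned from period t to the horizon T under a fixed feedback policy.

    P[a][s, s']   -- transition probabilities (assumed layout)
    R[a][s, s']   -- one-step rewards
    policy[t][s]  -- action chosen in state s at time t
    Returns M with M[t, s, k] = E[(sum of rewards from t..T-1)**k | s_t = s].
    """
    n_states = P[0].shape[0]
    M = np.zeros((T + 1, n_states, K + 1))
    M[:, :, 0] = 1.0  # zeroth moment is always 1
    for t in reversed(range(T)):
        for s in range(n_states):
            a = policy[t][s]
            for k in range(1, K + 1):
                total = 0.0
                for s2 in range(n_states):
                    r = R[a][s, s2]
                    # binomial expansion of (r + future reward)**k
                    inner = sum(comb(k, j) * r**j * M[t + 1, s2, k - j]
                                for j in range(k + 1))
                    total += P[a][s, s2] * inner
                M[t, s, k] = total
    return M
```

With such a table, the variance of total rewards from a start state s would be, e.g., `M[0, s, 2] - M[0, s, 1]**2`, so moment profiles of candidate policies can be compared without enlarging the state space.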
In many revenue management applications, risk-averse decision-making is crucial. In dynamic settings, however, it is challenging to find the right balance between maximizing expected rewards and minimizing various kinds of risk. Existing approaches use utility functions, chance constraints, or (conditional) value-at-risk considerations to influence the distribution of rewards in a preferred way. Nevertheless, common techniques are not flexible enough and are typically numerically complex. In our model, we exploit the fact that a distribution is characterized by its mean and higher moments. We present a multi-valued dynamic programming heuristic to compute risk-sensitive feedback policies that directly control the moments of future rewards. Our approach is based on recursive formulations of higher moments and does not require an extension of the state space. Finally, we propose a self-tuning algorithm that identifies feedback policies approximating predetermined (risk-sensitive) target distributions. We illustrate the effectiveness and flexibility of our approach for different dynamic pricing scenarios.
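The self-tuning step (matching a predetermined target distribution) is only described at a high level in the abstract. One plausible building block, sketched purely as an assumption and not as the authors' algorithm, is to score candidate policies by a weighted distance between their reward moments and the target's moments; the name `moment_distance`, the weights, and the numbers below are made up for illustration:

```python
import numpy as np

def moment_distance(policy_moments, target_moments, weights):
    """Weighted squared distance between a policy's reward moments and the
    moments of a predetermined target distribution (one heuristic way to
    rank candidate risk-sensitive policies)."""
    diffs = np.asarray(policy_moments) - np.asarray(target_moments)
    return float(np.sum(np.asarray(weights) * diffs**2))

# Example: target mean 100 and second moment 10_400 (i.e., variance 400),
# with the mean weighted more heavily than the second moment.
target = [100.0, 10_400.0]
weights = [1.0, 0.01]
candidates = {"aggressive": [120.0, 16_000.0], "cautious": [98.0, 10_000.0]}
best = min(candidates, key=lambda n: moment_distance(candidates[n], target, weights))
```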
ISSN: 0305-0548
DOI: 10.1016/j.cor.2020.104997