A nonlinear programming model for partially observable Markov decision processes: Finite horizon case
Published in: European Journal of Operational Research, 1995-11, Vol. 86 (3), p. 549
Format: Article
Language: English
Online access: Full text
Abstract: The concept of partially observable Markov decision processes was born to handle the lack of information about the state of a Markov decision process. If the state of the system is unknown to the decision maker, an obvious approach is to gather information that helps in selecting an action. This problem has already been solved using Markov processes; here, a nonlinear programming model is constructed for the same problem, and a solution algorithm that turns out to be a policy iteration algorithm is developed. The validity of the algorithm is tested using the nonlinear program solver GAMS/MINOS, as well as by solving some MDP problems as POMDP problems. The policies found this way are easier to use than those found by the existing method, although they have the same optimal objective value.
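The abstract does not give the nonlinear program itself, so the sketch below is not the paper's method; it only illustrates the standard finite-horizon POMDP value backup over belief states via alpha-vector enumeration, which is the kind of problem the paper's policy iteration algorithm addresses. The transition, observation, and reward arrays, the horizon, and the example belief are all invented placeholders.

```python
import itertools
import numpy as np

# Illustrative two-state / two-action / two-observation problem data (placeholders).
n_states, n_actions, n_obs, horizon = 2, 2, 2, 3
T = [np.array([[0.9, 0.1], [0.2, 0.8]]),   # T[a][s, s']: transition probabilities
     np.array([[0.5, 0.5], [0.5, 0.5]])]
Z = [np.array([[0.8, 0.2], [0.3, 0.7]]),   # Z[a][s', o]: observation probabilities
     np.array([[0.5, 0.5], [0.5, 0.5]])]
R = [np.array([1.0, 0.0]),                 # R[a][s]: immediate rewards
     np.array([0.0, 1.5])]

# Each alpha vector is the value of one conditional plan, linear in the belief.
# The horizon-0 value function is identically zero.
alphas = [np.zeros(n_states)]

for _ in range(horizon):
    new_alphas = []
    for a in range(n_actions):
        # gamma[o][i](s) = sum_{s'} T(s,a,s') Z(s',a,o) alpha_i(s'):
        # value contributed by observation o if plan i is followed afterwards.
        gamma = [[T[a] @ (Z[a][:, o] * alpha) for alpha in alphas]
                 for o in range(n_obs)]
        # One new plan per assignment of a continuation plan to each observation.
        for choice in itertools.product(range(len(alphas)), repeat=n_obs):
            new_alphas.append(R[a] + sum(gamma[o][choice[o]] for o in range(n_obs)))
    alphas = new_alphas

belief = np.array([0.5, 0.5])   # example belief state
print(max(float(alpha @ belief) for alpha in alphas))   # optimal expected value
```

Enumerating all conditional plans grows exponentially in the horizon, which is exactly the kind of burden that alternative formulations such as the paper's nonlinear programming model aim to sidestep.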
ISSN: 0377-2217, 1872-6860