Improving Policy Functions in High-Dimensional Dynamic Games

Bibliographic Details
Published in: NBER Working Paper Series, No. 21124, April 2015
Authors: Bajari, Patrick L.; Jiang, Ying; Manzanares, Carlos A.
Format: Article
Language: English
Online Access: Full text
Description
Abstract: In this paper, we propose a method for finding policy function improvements for a single agent in high-dimensional Markov dynamic optimization problems, focusing in particular on dynamic games. Our approach combines ideas from the literatures on Machine Learning and on the econometric analysis of games to derive a one-step improvement policy over any given benchmark policy. To reduce the dimensionality of the game, our method selects a parsimonious subset of state variables in a data-driven manner using a Machine Learning estimator. This one-step improvement policy can in turn be improved upon until a suitable stopping rule is met, as in the classical policy function iteration approach. We illustrate our algorithm in a high-dimensional entry game similar to that studied by Holmes (2011) and show that it yields a nearly 300 percent improvement in expected profits relative to a benchmark policy.
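
The abstract describes the algorithm only at a high level. The sketch below illustrates the one-step improvement idea in Python under strong simplifying assumptions: the `step` transition, the `benchmark_policy`, the two-period rollout, and all parameter values are hypothetical stand-ins, not the authors' environment or estimator. A lasso (standing in for the paper's Machine Learning estimator) selects a parsimonious subset of state variables, and the improved policy then acts greedily against the fitted value function.

```python
# Minimal sketch of one-step policy improvement with data-driven state
# selection. Everything below (environment, policies, parameters) is a
# hypothetical illustration, not the paper's implementation.
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)
DIM, N_SIM, BETA = 20, 2000, 0.95  # state dimension, simulation draws, discount factor

def step(state, action):
    """Hypothetical transition: only the first three state variables affect payoffs."""
    reward = state[:3] @ np.array([1.0, -0.5, 0.25]) + 0.1 * action
    nxt = 0.9 * state + 0.1 * rng.normal(size=DIM)
    nxt[0] += 0.05 * action
    return reward, nxt

def benchmark_policy(state):
    """A naive benchmark: always choose action 0."""
    return 0

# 1. Simulate under the benchmark policy and record realized continuation values.
states = rng.normal(size=(N_SIM, DIM))
values = np.empty(N_SIM)
for i, s in enumerate(states):
    r1, nxt = step(s, benchmark_policy(s))
    r2, _ = step(nxt, benchmark_policy(nxt))  # crude two-period rollout
    values[i] = r1 + BETA * r2

# 2. The lasso selects a parsimonious subset of state variables in a
#    data-driven manner, shrinking irrelevant coefficients to zero.
vhat = LassoCV(cv=5).fit(states, values)
print("selected state variables:", np.flatnonzero(vhat.coef_))

# 3. One-step improvement: act greedily against the estimated value function
#    (a single simulated draw stands in for the expectation over next states).
def improved_policy(state, actions=(0, 1)):
    def q(a):
        r, nxt = step(state, a)
        return r + BETA * vhat.predict(nxt.reshape(1, -1))[0]
    return max(actions, key=q)

print("improved action at a sample state:", improved_policy(states[0]))
```

Iterating steps 1 to 3, with the improved policy replacing the benchmark at each pass, would continue until a suitable stopping rule is met, mirroring classical policy function iteration.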
ISSN: 0898-2937
DOI: 10.3386/w21124