Learning Probabilistic Models: An Expected Utility Maximization Approach

Bibliographic Details
Published in: Journal of Machine Learning Research, 2004-04, Vol. 4 (3), p. 257-291
Main Authors: Friedman, Craig; Sandow, Sven
Format: Article
Language: English
Online Access: Full text
Description
Summary: We consider the problem of learning a probabilistic model from the viewpoint of an expected utility maximizing decision maker/investor who would use the model to make decisions (bets), which result in well-defined payoffs. In our new approach, we seek good out-of-sample model performance by considering a one-parameter family of Pareto optimal models, which we define in terms of consistency with the training data and consistency with a prior (benchmark) model. We measure the former by means of the large-sample distribution of a vector of sample-averaged features, and the latter by means of a generalized relative entropy. We express each Pareto optimal model as the solution of a strictly convex optimization problem and its strictly concave (and tractable) dual. Each dual problem is a regularized maximization of expected utility over a well-defined family of functions. Each Pareto optimal model is robust: it maximizes worst-case outperformance relative to the benchmark model. Finally, we select the Pareto optimal model with maximum (out-of-sample) expected utility. We show that our method reduces to the minimum relative entropy method if and only if the utility function is a member of a three-parameter logarithmic family.
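
To make the abstract's formulation concrete, the following is a rough sketch of the special case to which the authors say their method reduces, namely regularized minimum relative entropy under a logarithmic utility. The notation (benchmark model $p_0$, feature vector $f$, sample average $\bar f$, tolerance $\alpha$, dual norm $\|\cdot\|_*$) is introduced here purely for illustration and is not taken from the paper.

\[
\min_{p} \; D(p \,\|\, p_0) = \sum_{y} p(y)\,\log\frac{p(y)}{p_0(y)}
\quad \text{subject to} \quad \bigl\| \mathbb{E}_{p}[f] - \bar f \bigr\| \le \alpha ,
\]
with a dual of the schematic form
\[
\max_{\lambda} \; \frac{1}{N}\sum_{i=1}^{N} \log\frac{p_\lambda(y_i)}{p_0(y_i)} \;-\; \alpha\,\|\lambda\|_{*},
\qquad
p_\lambda(y) \;=\; \frac{p_0(y)\, e^{\lambda^{\top} f(y)}}{\sum_{y'} p_0(y')\, e^{\lambda^{\top} f(y')}} ,
\]
i.e., a regularized maximization of (log-utility) outperformance relative to the benchmark over the training sample $y_1,\dots,y_N$. For general utility functions, the paper's formulation replaces the relative entropy above by a generalized, utility-dependent divergence, while retaining the convex primal / concave dual structure described in the abstract.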
ISSN: 1532-4435
DOI: 10.1162/153244304773633816