Learning Probabilistic Models: An Expected Utility Maximization Approach
We consider the problem of learning a probabilistic model from the viewpoint of an expected utility maximizing decision maker/investor who would use the model to make decisions (bets), which result in well defined payoffs. In our new approach, we seek good out-of-sample model performance by considering a one-parameter family of Pareto optimal models, which we define in terms of consistency with the training data and consistency with a prior (benchmark) model. We measure the former by means of the large-sample distribution of a vector of sample-averaged features, and the latter by means of a generalized relative entropy. We express each Pareto optimal model as the solution of a strictly convex optimization problem and its strictly concave (and tractable) dual. Each dual problem is a regularized maximization of expected utility over a well-defined family of functions. Each Pareto optimal model is robust: maximizing worst-case outperformance relative to the benchmark model. Finally, we select the Pareto optimal model with maximum (out-of-sample) expected utility. We show that our method reduces to the minimum relative entropy method if and only if the utility function is a member of a three-parameter logarithmic family.
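The minimum relative entropy method that the abstract relates to logarithmic utility can be sketched on a toy problem (an illustrative sketch under assumed notation, not the paper's implementation): given a benchmark model p0 and a constraint that the model expectation of a feature f match a sample average mu, the model minimizing relative entropy to p0 has the Gibbs form p(x) ∝ p0(x) exp(λ f(x)), with λ found by maximizing a concave dual, as the abstract's dual problems suggest.

```python
import numpy as np
from scipy.optimize import minimize

# Toy minimum relative entropy fit (hypothetical example, not the paper's code).
# States x = 0..4, a uniform prior (benchmark) model p0, one feature f(x) = x,
# and a target sample-averaged feature value mu.
states = np.arange(5)
prior = np.ones(5) / 5.0          # benchmark model p0
f = states.astype(float)          # single feature f(x) = x
mu = 2.5                          # target sample average of f

def neg_dual(lam):
    # Negative of the concave dual objective: log-partition minus lam * mu.
    # Minimizing this maximizes the dual.
    logZ = np.log(np.sum(prior * np.exp(lam[0] * f)))
    return logZ - lam[0] * mu

res = minimize(neg_dual, x0=np.zeros(1))
lam = res.x[0]

# Gibbs-form solution: p(x) proportional to p0(x) * exp(lam * f(x)).
p = prior * np.exp(lam * f)
p /= p.sum()

print(p, p @ f)  # model expectation of f matches mu at the optimum
```

At the dual optimum the gradient condition is exactly the feature-matching constraint, which is why the fitted model's expectation of f recovers mu.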
Published in: | Journal of machine learning research 2004-04, Vol.4 (3), p.257-291 |
---|---|
Main Authors: | Friedman, Craig ; Sandow, Sven |
Format: | Article |
Language: | eng |
Online Access: | Full text |
container_end_page | 291 |
---|---|
container_issue | 3 |
container_start_page | 257 |
container_title | Journal of machine learning research |
container_volume | 4 |
creator | Friedman, Craig Sandow, Sven |
description | We consider the problem of learning a probabilistic model from the viewpoint of an expected utility maximizing decision maker/investor who would use the model to make decisions (bets), which result in well defined payoffs. In our new approach, we seek good out-of-sample model performance by considering a one-parameter family of Pareto optimal models, which we define in terms of consistency with the training data and consistency with a prior (benchmark) model. We measure the former by means of the large-sample distribution of a vector of sample-averaged features, and the latter by means of a generalized relative entropy. We express each Pareto optimal model as the solution of a strictly convex optimization problem and its strictly concave (and tractable) dual. Each dual problem is a regularized maximization of expected utility over a well-defined family of functions. Each Pareto optimal model is robust: maximizing worst-case outperformance relative to the benchmark model. Finally, we select the Pareto optimal model with maximum (out-of-sample) expected utility. We show that our method reduces to the minimum relative entropy method if and only if the utility function is a member of a three-parameter logarithmic family. |
doi_str_mv | 10.1162/153244304773633816 |
format | Article |
fulltext | fulltext |
identifier | ISSN: 1532-4435 |
ispartof | Journal of machine learning research, 2004-04, Vol.4 (3), p.257-291 |
issn | 1532-4435 |
language | eng |
recordid | cdi_proquest_miscellaneous_28237326 |
source | ACM Digital Library Complete; Elektronische Zeitschriftenbibliothek - Frei zugängliche E-Journals; EBSCOhost Business Source Complete |
title | Learning Probabilistic Models: An Expected Utility Maximization Approach |