A Comparison of Methods for Treatment Assignment with an Application to Playlist Generation

This study presents a systematic comparison of methods for individual treatment assignment, a general problem that arises in many applications and has received significant attention from economists, computer scientists, and social scientists. We group the various methods proposed in the literature into three general classes of algorithms (or metalearners): learning models to predict outcomes (the O-learner), learning models to predict causal effects (the E-learner), and learning models to predict optimal treatment assignments (the A-learner). We compare the metalearners in terms of (1) their level of generality and (2) the objective function they use to learn models from data; we then discuss the implications that these characteristics have for modeling and decision making. Notably, we demonstrate analytically and empirically that optimizing for the prediction of outcomes or causal effects is not the same as optimizing for treatment assignments, suggesting that in general the A-learner should lead to better treatment assignments than the other metalearners. We demonstrate the practical implications of our findings in the context of choosing, for each user, the best algorithm for playlist generation in order to optimize engagement. This is the first comparison of the three different metalearners on a real-world application at scale (based on more than half a billion individual treatment assignments). In addition to supporting our analytical findings, the results show how large A/B tests can provide substantial value for learning treatment assignment policies, rather than simply choosing the variant that performs best on average.

Detailed description

Bibliographic details
Main authors: Fernández-Loría, Carlos; Provost, Foster; Anderton, Jesse; Carterette, Benjamin; Chandar, Praveen
Format: Article
Language: eng
Subjects: Computer Science - Learning; Statistics - Machine Learning; Statistics - Methodology
Online access: Order full text
creator Fernández-Loría, Carlos; Provost, Foster; Anderton, Jesse; Carterette, Benjamin; Chandar, Praveen
description This study presents a systematic comparison of methods for individual treatment assignment, a general problem that arises in many applications and has received significant attention from economists, computer scientists, and social scientists. We group the various methods proposed in the literature into three general classes of algorithms (or metalearners): learning models to predict outcomes (the O-learner), learning models to predict causal effects (the E-learner), and learning models to predict optimal treatment assignments (the A-learner). We compare the metalearners in terms of (1) their level of generality and (2) the objective function they use to learn models from data; we then discuss the implications that these characteristics have for modeling and decision making. Notably, we demonstrate analytically and empirically that optimizing for the prediction of outcomes or causal effects is not the same as optimizing for treatment assignments, suggesting that in general the A-learner should lead to better treatment assignments than the other metalearners. We demonstrate the practical implications of our findings in the context of choosing, for each user, the best algorithm for playlist generation in order to optimize engagement. This is the first comparison of the three different metalearners on a real-world application at scale (based on more than half a billion individual treatment assignments). In addition to supporting our analytical findings, the results show how large A/B tests can provide substantial value for learning treatment assignment policies, rather than simply choosing the variant that performs best on average.
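The three metalearners described in the abstract can be illustrated on a small synthetic randomized experiment. Everything below is a hypothetical sketch of the general idea, not the authors' implementation: the linear models, the transformed-outcome effect estimate, and the weighted sign classifier are stand-ins for whatever learners one would actually use.

```python
import numpy as np

# Synthetic randomized experiment: treatment 1 helps users with
# x[:, 0] > 0 and hurts the rest, so the optimal policy is sign-based.
rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=(n, 2))                      # user features
t = rng.integers(0, 2, size=n)                   # randomized A/B assignment
y = x[:, 0] * (2 * t - 1) + 0.1 * x[:, 1] + rng.normal(size=n)

def fit_linear(X, y):
    """OLS with an intercept column; returns a prediction function."""
    X1 = np.column_stack([np.ones(len(X)), X])
    w, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return lambda Z: np.column_stack([np.ones(len(Z)), Z]) @ w

# O-learner: predict the outcome under each arm, assign the argmax.
mu0 = fit_linear(x[t == 0], y[t == 0])
mu1 = fit_linear(x[t == 1], y[t == 1])
assign_O = (mu1(x) > mu0(x)).astype(int)

# E-learner: predict the causal effect directly. Under randomization
# with p = 0.5, z = y * (t - p) / (p * (1 - p)) is an unbiased
# per-user effect signal (the "transformed outcome").
p = 0.5
z = y * (t - p) / (p * (1 - p))
tau = fit_linear(x, z)
assign_E = (tau(x) > 0).astype(int)

# A-learner: predict the assignment itself by classifying the sign of
# z, weighting each example by |z| so that misassigning users with
# large effects costs more (a standard policy-learning reduction).
sw = np.sqrt(np.abs(z))                          # sqrt-weights for WLS
X1 = np.column_stack([np.ones(n), x])
coef, *_ = np.linalg.lstsq(X1 * sw[:, None], (z > 0).astype(float) * sw,
                           rcond=None)
assign_A = ((X1 @ coef) > 0.5).astype(int)

oracle = (x[:, 0] > 0).astype(int)               # true optimal policy
for name, a in [("O", assign_O), ("E", assign_E), ("A", assign_A)]:
    print(f"{name}-learner agreement with oracle: {(a == oracle).mean():.3f}")
```

On this toy problem all three recover the oracle policy well because the linear models are correctly specified; the paper's point is that their objectives diverge in general, with only the A-learner optimizing the assignment decision itself.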
doi_str_mv 10.48550/arxiv.2004.11532
format Article
fulltext fulltext_linktorsrc
identifier DOI: 10.48550/arxiv.2004.11532
language eng
recordid cdi_arxiv_primary_2004_11532
source arXiv.org
subjects Computer Science - Learning; Statistics - Machine Learning; Statistics - Methodology
title A Comparison of Methods for Treatment Assignment with an Application to Playlist Generation